PALO ALTO, CA--(Marketwired - Sep 30, 2013) - Cloudera, the leader in enterprise analytic data management powered by Apache Hadoop™, today announced its formal support for, and integration with, Apache Accumulo, a distributed, massively parallel database that can analyze structured and unstructured data and delivers fine-grained user access control and authentication. Accumulo uniquely enables system administrators to assign data access at the cell level, ensuring that only authorized users can view and manipulate individual data points. This fine-grained control lets the maximum number of users access a database while it remains compliant with data privacy and security regulations. The company stated that it plans to bolster its support for the open source project, which is managed by the Apache Software Foundation (ASF), by devoting significant internal engineering resources to speed Accumulo's development.
"Accumulo makes it possible to control data visibility and access at the cell level, offering a superior alternative for our customers in the US intelligence community and federal sector," said Mike Olson, chief strategy officer at Cloudera. "It offers a strong complement to HBase, which has been part of our CDH offering since 2010, and remains the dominant high-performance delivery engine for NoSQL workloads running on Hadoop. However, Accumulo was expressly built to augment sensitive data workloads with fine-grained user access and authentication controls that are of mission-critical importance for federal and highly regulated industries. Bringing ironclad security to Hadoop remains a top priority for Cloudera, and Accumulo is a significant addition to the security capabilities of the platform. We will continue to invest in further development of Accumulo in collaboration with the Apache open source community and to ensure tight integration with our full product stack for our customers."
Apache Accumulo: Protecting Sensitive Data at the Source
Large amounts of data flow through Federal agencies every day. The velocity, variety and frequency of this flow are increasing, and being able to make sense of seemingly random information is critical for Federal agencies making strategic and mission-critical decisions for the nation's well-being.
Apache Accumulo was designed to store massive amounts of data across an array of different machines. Originally built by the National Security Agency (NSA) from Google's BigTable data model, Accumulo uniquely enables data administrators to assign user access to information at the cell level, ensuring that only those who meet the required security clearance and privacy permissions can view and manipulate it. This makes it ideal for securely storing classified information. As a linchpin of the Apache Hadoop ecosystem, Accumulo is fully integrated with Apache Hadoop, Apache ZooKeeper, Apache Thrift and Apache Pig.
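To make the cell-level model concrete: in Accumulo, each cell carries a visibility label, a boolean expression over security tokens, and a cell is returned to a user only if that expression is satisfied by the user's authorizations. The following is a simplified, illustrative Python sketch of that semantics under stated assumptions (tokens combined with `&`, `|` and parentheses); it is a toy evaluator, not Accumulo's actual implementation or API.

```python
def evaluate_visibility(expression, authorizations):
    """Return True if an Accumulo-style visibility label expression
    (e.g. "secret&(usa|gbr)") is satisfied by the given set of
    authorization tokens. Supports &, | and parentheses."""
    # Tokenize into terms and the operators & | ( )
    tokens, term = [], ""
    for ch in expression:
        if ch in "&|()":
            if term:
                tokens.append(term)
                term = ""
            tokens.append(ch)
        elif not ch.isspace():
            term += ch
    if term:
        tokens.append(term)

    pos = 0  # cursor into the token stream

    def parse_or():
        nonlocal pos
        result = parse_and()
        while pos < len(tokens) and tokens[pos] == "|":
            pos += 1
            result = parse_and() or result
        return result

    def parse_and():
        nonlocal pos
        result = parse_atom()
        while pos < len(tokens) and tokens[pos] == "&":
            pos += 1
            result = parse_atom() and result
        return result

    def parse_atom():
        nonlocal pos
        if tokens[pos] == "(":
            pos += 1                 # consume "("
            result = parse_or()
            pos += 1                 # consume ")"
            return result
        token = tokens[pos]
        pos += 1
        # A bare term is true iff the user holds that authorization
        return token in authorizations

    return parse_or()

# A user cleared for "secret" and "usa" can read the cell; a user
# holding only "usa" cannot.
print(evaluate_visibility("secret&(usa|gbr)", {"secret", "usa"}))  # True
print(evaluate_visibility("secret&(usa|gbr)", {"usa"}))            # False
```

The key property this models is that access is decided per cell, not per table or per row, which is what lets differently cleared users safely share one database.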
"Accumulo was purpose-built to address the very specific security needs of the Federal government and US intelligence communities, making it a natural and highly complementary extension to our efforts to deliver military-grade security for Hadoop," said Tom Reilly, chief executive officer, Cloudera. "We are decisively all-in on Accumulo and will be working closely with our partners and customers worldwide to further evolve the platform and increase its functionality, performance and reliability, while ensuring tight integration with our product suite. Bringing the power of Accumulo into our Platform for Big Data will enable both our public and private sector customers to achieve new levels of security and compliance from Apache Hadoop and CDH."
About Apache Accumulo
The Apache Accumulo™ sorted, distributed key/value store is a robust, scalable, high-performance data storage and retrieval system. Apache Accumulo is based on Google's BigTable design and is built on top of Apache Hadoop, ZooKeeper, and Thrift. Apache Accumulo offers novel improvements on the BigTable design in the form of cell-based access control and a server-side programming mechanism that can modify key/value pairs at various points in the data management process.
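The two ideas in the paragraph above, a sorted key/value store plus server-side processing of entries before they are returned, can be sketched in miniature. This is a hypothetical toy (the `TinyStore` class and its `put`/`scan` names are inventions for illustration, not Accumulo's API) showing a range scan over sorted keys with a predicate applied on the "server" side, so filtered entries never leave the store:

```python
import bisect

class TinyStore:
    """Toy sorted key/value store illustrating Accumulo-style range
    scans with a server-side filter (not Accumulo's actual API)."""

    def __init__(self):
        self._entries = []  # kept sorted as (key, value) pairs

    def put(self, key, value):
        # Insert while preserving sort order by key
        bisect.insort(self._entries, (key, value))

    def scan(self, start, stop, server_filter=None):
        """Yield (key, value) pairs with start <= key < stop, applying
        an optional predicate before any data is returned."""
        lo = bisect.bisect_left(self._entries, (start, ""))
        for key, value in self._entries[lo:]:
            if key >= stop:
                break
            if server_filter is None or server_filter(key, value):
                yield key, value

store = TinyStore()
store.put("row1", "alpha")
store.put("row3", "gamma")
store.put("row2", "beta")

# Range scan with a server-side filter that drops one value before
# it ever reaches the client.
results = list(store.scan("row1", "row3",
                          server_filter=lambda k, v: v != "beta"))
print(results)  # [('row1', 'alpha')]
```

In Accumulo the analogous mechanism is far richer (its server-side iterators can filter, transform, and aggregate key/value pairs at scan and compaction time), but the sketch captures the shape: sorted keys enable efficient range scans, and server-side logic runs before data crosses the wire.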
About Cloudera
Founded in 2008, Cloudera pioneered the business case for Hadoop with CDH, the world's most comprehensive, thoroughly tested and widely deployed 100% open source distribution of Apache Hadoop in both commercial and non-commercial environments. Now, the company is redefining data management with its Platform for Big Data, Cloudera Enterprise, empowering enterprises to Ask Bigger Questions™ and gain rich, actionable insights from all their data, to quickly and easily derive real business value that translates into competitive advantage. As the top contributor to the Apache open source community and leading educator of data professionals with the broadest array of Hadoop training and certification programs, Cloudera also offers comprehensive consulting services. Over 700 partners across hardware, software and services have teamed with Cloudera to help meet organizations' big data goals. With tens of thousands of nodes under management and hundreds of customers across diverse markets, Cloudera is the category leader that has set the standard for Hadoop in the enterprise. www.cloudera.com
Connect with Cloudera
Read our blog: http://www.cloudera.com/blog/
Follow us on Twitter: http://twitter.com/cloudera
Visit us on Facebook: http://www.facebook.com/cloudera