SOURCE: RNA Networks

December 15, 2009 10:00 ET

RNA Networks Predicts Memory Virtualization Will Be a Top Technology to Watch in 2010

Memory Virtualization Will Be as Widely Accepted in the Data Center as Server, Desktop and Storage Virtualization

PORTLAND, OR--(Marketwire - December 15, 2009) - RNA Networks, the leader in memory virtualization software that transforms server memory into a shared network resource, today announced the top three trends fueling memory virtualization's expected growth in 2010.

IT professionals agree that memory is a critical resource constraint in the data center, with both economic and operational impact. Application performance suffers greatly when a memory-intensive application needs more memory than is available. Each year, IT spends millions of dollars trying to address the problem by adding memory to servers and upgrading processors. However, IT is finding that, compared to these workarounds, memory virtualization more effectively reduces infrastructure cost and complexity and extends the life of existing infrastructure investments.

"In 2010, we expect memory virtualization to follow server, desktop and storage virtualization and become widely accepted in the data center," said Clive Cook, CEO, RNA networks. "In 2009, we saw a number of vendors like Cisco with UCS and HP with Matrix address the problem of memory limitations in the data center and attention to the age-old problem of memory limitations will only heat up as data volumes continue to expand."

Memory virtualization will be one of the top technologies to watch in the coming year. Three specific trends are fueling this growth:

1. Storage Will Run Out of Headroom

Storage is often looked to as a way to address I/O bottlenecks and memory capacity limitations, but this approach does not address the core I/O issue and introduces significant latency and throughput penalties. Faster storage can only marginally improve application performance, whereas memory is roughly 100 times faster than storage and is 'application-aware': memory operates at the application layer, where it can interact with active data more easily and quickly, providing order-of-magnitude gains in utilization, performance and speed. Regardless of the type of storage (spinning disk or SSD) or the acceleration options (NFS, storage caching), memory will remain the fastest option.
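To make the latency gap concrete, the rough Python sketch below (all file names and sizes are arbitrary choices for illustration, not figures from RNA Networks) times moving a working set through RAM against reading the same bytes back from a file; on typical hardware the in-memory path is orders of magnitude faster, though the operating system's page cache can narrow the measured gap.

    # Illustrative micro-benchmark only; absolute numbers vary by hardware
    # and the OS page cache can mask part of the storage penalty.
    import os
    import time

    SIZE = 256 * 1024 * 1024                  # 256 MB working set (arbitrary)
    PATH = "working_set.bin"                  # hypothetical scratch file

    data = os.urandom(SIZE)
    with open(PATH, "wb") as f:
        f.write(data)

    start = time.perf_counter()               # storage path: read back from disk
    with open(PATH, "rb") as f:
        from_disk = f.read()
    disk_s = time.perf_counter() - start

    start = time.perf_counter()               # memory path: force a real copy in RAM
    in_memory = bytearray(data)
    mem_s = time.perf_counter() - start

    print(f"storage read: {disk_s:.4f}s, memory copy: {mem_s:.4f}s")
    os.remove(PATH)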

2. Data Volumes Will Outpace Available Server Memory

The volume of data in the data center that must be quickly analyzed and interpreted is growing exponentially. Memory density on a single box is not keeping pace, and the density improvements that are available (16GB DIMMs, for example) are costly. Methods for making data available to distributed servers are expensive in terms of programming complexity, bandwidth, IT oversight and scaling. As data volumes continue to grow, they will outpace the memory available on any single server. Making memory a pooled, shared resource across servers is the most effective answer, as sketched below.
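As a rough sketch of the pooling idea (a hypothetical Python illustration; the class and the in-process dictionary standing in for remote nodes are invented for this example and are not RNA Networks' API), hot data stays in a bounded local tier while overflow spills to memory that, in a real deployment, would be contributed by other servers over the network:

    # Hypothetical sketch: a local memory tier with a fixed byte budget that
    # spills least-recently-used items to a pooled tier. The pooled tier is a
    # plain dict here; in practice it would be memory on other servers
    # reached over the network.
    from collections import OrderedDict

    class PooledMemory:
        def __init__(self, local_budget_bytes, remote_store=None):
            self.local_budget = local_budget_bytes
            self.local = OrderedDict()            # LRU-ordered local tier
            self.local_bytes = 0
            self.remote = remote_store if remote_store is not None else {}

        def put(self, key, value):
            if key in self.local:                 # replacing: release old bytes
                self.local_bytes -= len(self.local[key])
            self.local[key] = value
            self.local.move_to_end(key)
            self.local_bytes += len(value)
            while self.local_bytes > self.local_budget:
                old_key, old_val = self.local.popitem(last=False)
                self.local_bytes -= len(old_val)
                self.remote[old_key] = old_val    # spill to the pooled tier

        def get(self, key):
            if key in self.local:
                self.local.move_to_end(key)
                return self.local[key]
            value = self.remote.pop(key)          # "network" fetch, then promote
            self.put(key, value)
            return value

    pool = PooledMemory(local_budget_bytes=2 * 1024)
    for i in range(10):
        pool.put(f"block-{i}", bytes(512))        # 5 KB of data, 2 KB local budget
    print(len(pool.local), "blocks local,", len(pool.remote), "blocks pooled")
    print(len(pool.get("block-0")), "bytes fetched back from the pool")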

3. Memory as a Service in the Cloud

The success of a cloud, whether hosted or within the enterprise, hinges on its ability to meet SLAs and manage costs by driving infrastructure utilization as high as possible. Tools to provision servers, storage and I/O bandwidth are common in the cloud. Today, memory is available only as a server component and therefore cannot be provisioned and managed independently of processor capacity. In practice, many application workloads are either highly compute-bound (requiring minimal memory) or highly I/O-bound (requiring minimal CPU power but access to large data spaces). Yet these two resources are tightly coupled in terms of their provisioning and management, resulting in a "lumpy" allocation of resources and suboptimal utilization, as the example below shows. Memory virtualization breaks the fixed relationship between memory and servers, giving cloud providers far more flexibility and scale. Pools of networked memory are a fundamental requirement for clusters and will be a key component of cloud infrastructure.
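A back-of-the-envelope calculation makes the "lumpy" allocation point concrete; every figure below is hypothetical and chosen only to show the shape of the argument, not drawn from any vendor's data:

    # Hypothetical numbers: ten 64 GB servers running a mix of compute-bound
    # and I/O-bound jobs. With memory tied to each box, I/O-bound jobs are
    # capped at one server's RAM while compute-bound boxes sit mostly idle.
    servers = 10
    ram_per_server_gb = 64

    compute_bound_jobs = 6                    # ~8 GB of memory each
    io_bound_jobs = 4                         # ~120 GB of memory each

    demand_gb = compute_bound_jobs * 8 + io_bound_jobs * 120
    total_ram_gb = servers * ram_per_server_gb

    # Coupled model: each I/O-bound job sees at most one server's 64 GB.
    coupled_gb = compute_bound_jobs * 8 + io_bound_jobs * ram_per_server_gb

    print(f"installed RAM: {total_ram_gb} GB, workload demand: {demand_gb} GB")
    print(f"memory tied to servers serves {coupled_gb} GB "
          f"({coupled_gb / demand_gb:.0%} of demand)")
    print(f"a shared pool could serve {min(total_ram_gb, demand_gb)} GB "
          f"({min(total_ram_gb, demand_gb) / demand_gb:.0%} of demand)")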

RNA Networks redefines the way memory is accessed and utilized in a data center. Instead of adding more expensive storage or servers, RNA Networks logically separates memory from the physical server and makes it a shared network resource across the data center. For more information, please visit

About RNA Networks

RNA Networks' memory virtualization technology, built on the company's flagship Memory Virtualization Platform, transforms server memory into a shared network resource. RNA Networks' products include RNAmessenger for low-latency applications and RNAcache for large working data sets. The company is based in Portland, Oregon, with offices in Silicon Valley. RNA Networks was founded by enterprise software and hardware industry veterans in 2006. For more information, visit