Part 1: Is the Next Round of the Software Revolution (Driven by HW Innovation) Just Around the Corner?
We live in an interesting time in which technological innovation is happening faster than ever before.
It is driven by the constant demand for more: greater speed, extreme volumes of data, simplicity, and lower cost!
Many software-driven solutions have emerged in recent years that turn masses of commodity hardware into cost-effective, easy-to-use infrastructure. While classical workloads (databases, OLTP, OLAP, Exchange, etc.) remain the mainstay of enterprise data centers, new workloads (object, NoSQL, etc.) have seen rapid adoption. Emerging analytics solutions extract deeper insight and better decisions from quickly and massively growing data (big data). Hypervisors from different vendors have greatly simplified the management of a wide variety of workloads and maximized hardware utilization. Public cloud providers (Amazon, Azure, etc.) and private/converged cloud providers (VCE, Nutanix, etc.) have combined polished hypervisor management applications with scalable software on off-the-shelf hardware, so that deploying infrastructure, and the workloads on top of it, can be done with just a few clicks, greatly simplifying the job of data center administrators. The Software-Defined Data Center is not just a buzzword; it is happening now. Users are shifting away from building their own infrastructure, separately purchasing servers, switches, storage, and software, toward public or private clouds where resources are already integrated and ready to go!
These changes make this an exciting time for everyone in the data center!
But New Disruptive Innovations Are Happening in the Hardware…
Even as this first wave of software-driven innovation, built on today's commodity hardware, continues to mature, a fundamental change in the underlying hardware technologies has begun. These new hardware technologies are very exciting. As they move from bare ideas to real products, a new wave of software innovation is inevitable. This new hardware not only shows the first signs of huge advantages for today's applications, but also opens the door to new ones. There is a lot of excitement around the arrival of persistent memory (XPoint, etc.), low-latency interconnect products/solutions (RoCE, etc.), low-overhead container technologies, and newly recognized roles for FPGAs/GPUs. All of these technologies advance toward the same goal: accelerating workloads in a cost-effective manner.
So… What Does It Mean for Software, and What Solution Opportunities Are Presented?
As most of these hardware components make their way into the ecosystem, the need for the software stack to evolve becomes apparent as well. The consuming software stack must adapt to be able to “combine” these new components into dramatic workload improvements.
To explore this, let's take a look at two of these hardware innovations and the potential shortcomings of the current software stack in fully exploiting them. The new “persistent memory” and “low-latency network interconnect” technologies are promising in that, in the near future, it may be possible to build a rack with the following ingredients:
Large persistent memory (for storage) with ~1 µs latency
Network interconnect with ~1 µs latency
This is an order of magnitude better than the sum of the latencies (~100 µs) that exist today for the corresponding components in a rack. Now imagine the impact if access to persistent data, both “within” and “across” compute nodes, became super-efficient. It is very exciting! These technologies have the potential to accelerate many of today's workloads (5X/10X/20X acceleration?), whether single-threaded (1 queue depth) or multi-tasking (multi-queue depth). This means a rack built with these capabilities could run many more workloads (and run them faster) than today's racks of an equivalent footprint. That has significant implications for business agility, energy savings, real estate, etc. But that is not all. The new storage access models (persistent memory and low-latency network interconnects) also promise to dramatically simplify the programming of quite a few applications.
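To see why the rest of the software stack matters so much here, consider a quick back-of-the-envelope sketch (the ~100 µs and ~2 µs figures come from above; the 20 µs software overhead is purely an illustrative assumption):

$$\text{raw hardware gain} \approx \frac{100\,\mu\text{s}}{1\,\mu\text{s} + 1\,\mu\text{s}} = 50\times, \qquad \text{end-to-end speedup} \approx \frac{t_{\text{sw}} + 100\,\mu\text{s}}{t_{\text{sw}} + 2\,\mu\text{s}}\Big|_{t_{\text{sw}} = 20\,\mu\text{s}} = \frac{120}{22} \approx 5.5\times$$

In other words, a 50X hardware improvement can shrink to roughly 5X at the application if a 20 µs-per-access software overhead is left in place, which is exactly the software-stack problem discussed below.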
These innovations could have an even greater impact on workloads than all-flash arrays did when they arrived in the data center!
However, the software stack is not ready today to truly take advantage of this impending hardware disruption. The overhead of the current system software stack (the I/O path and the data-services path) masks the benefits these technologies offer. A research paper from the Georgia Institute of Technology (Systems and Applications for Persistent Memory) notes:
“… Research has shown that, as storage becomes faster, software overhead tends to become the dominant source of cost, thus requiring a rethinking of the software stacks [105]. As mentioned earlier, traditional storage stacks assume that storage is in another address space and operate on a block-device abstraction. They implement intermediate layers, such as the page cache, to stage data. In the case of PM (persistent memory), such a multi-layer design results in unnecessary copies and translations in the software. It is possible to remove this overhead by avoiding the page cache and the block-level abstraction entirely. Low-overhead (but managed) access to the PM is fundamental to ensure that applications realize its full potential …”
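To make the quote's point concrete, here is a minimal C sketch of the direct load/store path it describes, assuming a Linux filesystem mounted with DAX at /mnt/pmem (an illustrative path): mmap() maps the persistent media straight into the application's address space, so ordinary store instructions replace the read()/write() trip through the page cache and block layer.

```c
/* Minimal sketch: direct access to persistent memory, bypassing the page
 * cache and block abstraction. Assumes a DAX-capable filesystem mounted
 * at /mnt/pmem (illustrative path). Build: cc dax_demo.c -o dax_demo */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN 4096

int main(void)
{
    int fd = open("/mnt/pmem/data", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, LEN) != 0) { perror("ftruncate"); return 1; }

    /* With DAX, this mapping points at the persistent media itself:
     * no page-cache copy sits between the application and the device. */
    char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello, persistent memory");   /* ordinary store instructions */

    /* Ensure durability before declaring the data persistent. */
    if (msync(p, LEN, MS_SYNC) != 0) { perror("msync"); return 1; }

    munmap(p, LEN);
    close(fd);
    return 0;
}
```

On real persistent memory, the msync() above is typically replaced by user-space CPU cache flushes; the libpmem sketch later in this article shows that variant.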
These hardware components are coming… and will be commodity “goods” at some point. The software solution stack (especially the I/O path and the data-services path) trails them today in a very significant way. Since these software components are not yet available in “complete, usable product” form, delivering these capabilities in a single integrated product is a great opportunity. Someone will have to step back and build a solution that stitches these discrete but related pieces of innovation together into “finished product” form.
The need, therefore, is for a user-consumable offering that integrates these “new” hardware components with the corresponding changes in the software stack, efficiently and transparently delivering their benefits to existing workloads while also providing a framework for “new” workloads.
Well… Quite a Few Research Efforts Are Already in the Works…
Several open source initiatives are in play, and many companies are working together to standardize interfaces and demonstrate the resulting benefits for different workloads. Many of the possible solutions and workload transitions will be discussed.
Persistent Memory Programming Model
“For many years, computer applications organized their data between two tiers: memory and storage. We believe the emerging persistent memory technologies introduce a third tier. Persistent memory (or pmem) is accessed like volatile memory, using processor load and store instructions, but it retains its contents even after power loss, like storage.”
Georgia Institute of Technology
Systems and Applications for Persistent Memory
“Emerging non-volatile (or persistent) memories bridge the performance and capacity gap between memory and storage, introducing a new tier. To realize the full potential of future hybrid memory systems that couple DRAM with PM, we need to build new system software and application mechanisms that enable the optimal use of PM both as fast storage and as scalable, low-cost (but slower) memory.”
“A new programming model for persistent memory (PM) – NVM hardware designed to be treated by software similarly to system memory”
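As one concrete illustration of this model, here is a minimal sketch using libpmem from the open-source PMDK (pmem.io); the file path is illustrative, and this is just one way to exercise the model, not the only one. The library decides whether user-space cache flushes or a fallback msync() are needed to make stores durable.

```c
/* Minimal sketch of the pmem load/store programming model using libpmem
 * from the open-source PMDK (pmem.io). The path below is illustrative.
 * Build: cc pmem_demo.c -o pmem_demo -lpmem */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define POOL_LEN 4096

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file-backed pmem region directly into the address space. */
    char *buf = pmem_map_file("/mnt/pmem/demo", POOL_LEN, PMEM_FILE_CREATE,
                              0644, &mapped_len, &is_pmem);
    if (buf == NULL) { perror("pmem_map_file"); return 1; }

    /* Store with ordinary CPU instructions: no syscalls, no page cache,
     * no block abstraction in the data path. */
    strcpy(buf, "stored like memory, durable like storage");

    /* Make the stores durable: flush CPU caches if this is real pmem,
     * otherwise fall back to msync() on an ordinary mapping. */
    if (is_pmem)
        pmem_persist(buf, strlen(buf) + 1);
    else
        pmem_msync(buf, strlen(buf) + 1);

    pmem_unmap(buf, mapped_len);
    return 0;
}
```

The notable design point is that persistence becomes a cache-flush problem rather than an I/O problem: the third tier is programmed like memory and only made durable like storage.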
So… What Products/Solutions and Markets Are We Talking About?
Momentum is building, and recognition is growing of the existence and potential of these innovations as they make their way into the market, with the expectation that this hardware will become a commodity in the future. It takes time for the data center ecosystem to embrace change, unless that change transparently and radically improves on the architectures that exist today and is simple to experiment with.
Consequently, to really change things and move faster, there is a very good opportunity to deliver a productized software-stack solution built with these innovative components, where the solution:
Delivers tangible and transparent benefits to existing workloads, very quickly.
Creates opportunities for simple experimentation with and deployment of new workloads by supporting new open standards, and allows new applications to be developed on the platform.