HPE reveals a prototype of futuristic The Machine, but without cutting-edge memristors | PCWorld

In 2004, Kirk Bresniker, then at Hewlett-Packard, set out to make radical changes to computer architecture with The Machine, drawing out the first concept design on a whiteboard.

At the time, Bresniker, now chief architect at HP Labs, wanted to build a system that could drive computing into the future. The goal was a computer that used cutting-edge technologies like memristors and photonics.

It’s been an arduous journey, but HPE on Tuesday finally showed a prototype of The Machine at a lab in Fort Collins, Colorado.

It’s not close to what the company envisioned when The Machine was first announced in 2014, but it follows the same principle of pushing computing into memory subsystems. The system breaks the limitations of conventional PC and server architectures, in which memory is a bottleneck.

The standout feature of the mega-server is its 160TB of memory, more than any single server today can boast and more than three times the capacity of HPE’s Superdome X.

The Machine runs 1,280 Cavium ARM CPU cores. The memory and 40 32-core ARM chips—split across four Apollo 6000 enclosures—are linked via a superfast fabric interconnect. The interconnect acts like a data superhighway into which multiple co-processors can be plugged.

The connections are designed in a mesh network so memory and processor nodes can easily communicate with each other. FPGAs provide the controller logic for the interconnect fabric.
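
To get a feel for how the published figures fit together, here is a back-of-the-envelope sketch in Python using only the numbers in this article; spreading the memory pool evenly across the chips is an assumption, since HPE has not said how it is divided.

```python
# Back-of-the-envelope check of the published figures for The Machine
# prototype, using only numbers reported in this article.

TOTAL_MEMORY_TB = 160   # shared, fabric-attached memory pool
CHIPS = 40              # Cavium 32-core ARM SoCs
CORES_PER_CHIP = 32
ENCLOSURES = 4          # Apollo 6000 enclosures

total_cores = CHIPS * CORES_PER_CHIP
print(f"Total CPU cores: {total_cores}")                     # 1,280, as reported
print(f"Chips per enclosure: {CHIPS // ENCLOSURES}")         # 10
# Assumption: memory divided evenly across chips (not stated by HPE).
print(f"Memory per chip: {TOTAL_MEMORY_TB / CHIPS:.0f} TB")  # 4 TB
```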

Computers will deal with huge amounts of information in the future and The Machine will be prepared for that influx, Bresniker said.

In a way, The Machine prepares computers for when Moore’s Law runs out of steam, he said. It’s becoming tougher to cram more transistors and features into chips, and The Machine is a distributed system that breaks up processing among multiple resources.

The Machine is also ready for futuristic technologies. Slots in The Machine allow the addition of photonics connectors, which will connect to the new fabric linking up storage, memory, and processors. The interconnect itself is an early implementation of the Gen-Z interconnect, which is backed by major hardware, chip, storage, and memory makers.

HPE is improving the memory and storage subsystems in PCs and servers, which gives computing an overall boost. Processing data faster inside memory and storage reduces the need to keep speeding up instructions per clock in CPUs.

In-memory computing has sped up applications like databases and ERP systems, and HPE is blowing up the design of such systems. There’s also a move toward decoupling memory and storage from main servers, which speeds up computing and makes more efficient use of data center resources like cooling.

There have been some glitches, though. The initial model of The Machine was supposed to have memristors, a type of memory and storage that could help computers make decisions based on the data they retain. HP announced the memristor in 2008, but the technology has been delayed multiple times. The company is now developing it with Western Digital, Bresniker said.

Bresniker is taking an open-source approach to the development of The Machine, with the ethos of cooperation among partners to build such systems in the future. This system is a prototype that will drive the development and implementation of Gen-Z and of circuits that can be used as co-processors.

While HPE is trying to build a new system, Intel is coming from another angle with its 3D Xpoint storage and memory. System makers will try to build faster computers around Intel’s 3D Xpoint-based Optane storage, which the chipmaker says will eventually replace DRAM and SSDs.

The Machine is a future computer architecture that is also practical, said Patrick Moorhead, principal analyst at Moor Insights & Strategy.

“The fact that they can do this and run programs on it, it’s absolutely amazing,” Moorhead said. The Machine runs a version of Linux.

The Machine stands somewhere between the computers of today and future systems like quantum computers. But it’s still three to five years away from being ready for practical implementation in data centers, Moorhead said.

Source: HPE reveals a prototype of futuristic The Machine, but without cutting-edge memristors | PCWorld

Fujitsu, 1QBit Collaborate on Quantum-Inspired AI Cloud Service

TOKYO and VANCOUVER, May 16, 2017 — Fujitsu Limited and 1QB Information Technologies Inc. announced that starting today they will collaborate on applying quantum-inspired technology to the field of artificial intelligence (AI), focusing on the areas of combinatorial optimization and machine learning. The companies will work together in both the Japanese and global markets to develop applications which address industry problems using AI developed for use with quantum computers.

This collaboration will enable software developed by 1QBit for quantum computers to run on a “digital annealer,” jointly developed by Fujitsu Laboratories Ltd. and the University of Toronto. A digital annealer is a computing architecture that can rapidly solve combinatorial optimization problems using existing semiconductor technology.
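
To make the problem class concrete: annealers, digital or quantum, typically take their input as a QUBO (quadratic unconstrained binary optimization) problem and search for the bit vector minimizing x^T Q x. The sketch below is plain classical simulated annealing on a made-up toy Q matrix; it illustrates the search, not Fujitsu’s or 1QBit’s actual algorithms.

```python
import math
import random

def qubo_energy(Q, x):
    """Objective value x^T Q x for a binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def simulated_annealing(Q, steps=10_000, t_start=2.0, t_end=0.01):
    """Classical simulated annealing: a stand-in for the kind of
    combinatorial search that annealing hardware accelerates."""
    n = len(Q)
    x = [random.randint(0, 1) for _ in range(n)]
    energy = qubo_energy(Q, x)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = random.randrange(n)
        x[i] ^= 1                                  # propose a single bit flip
        new_energy = qubo_energy(Q, x)
        if new_energy <= energy or random.random() < math.exp((energy - new_energy) / t):
            energy = new_energy                    # accept the move
        else:
            x[i] ^= 1                              # reject: undo the flip
    return x, energy

# Toy instance (hypothetical): minimize x0 + x1 - 2*x0*x1 - x2.
Q = [[ 1, -1,  0],
     [-1,  1,  0],
     [ 0,  0, -1]]
print(simulated_annealing(Q))  # optimum energy is -1, e.g. x = [0, 0, 1]
```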

Over the last four years, 1QBit has developed new methods for machine learning, sampling, and optimization based on reformulating problems to meet the unique requirements of interfacing with quantum computers. The company’s research and software development teams have focused on solving sampling, optimization, and machine learning problems to improve applications in industries including finance, energy, advanced materials, and the life sciences. The combination of Fujitsu’s cutting-edge computer architecture and hardware technology, and 1QBit’s software technology, will enable advances in machine learning to solve complicated, large-scale optimization problems.

Fujitsu has systematized its AI technology and know-how, developed over the course of more than thirty years, under the name Zinrai. The platform will support customers in using AI and will be available as the Fujitsu Cloud Service K5 Zinrai Platform Service. During 2017, Fujitsu will offer the results of this collaboration as an option in Zinrai Deep Learning, a Zinrai cloud service within the Fujitsu Cloud Service K5 Zinrai Platform Service.

In the future, the two companies will provide a variety of services that combine 1QBit’s software and expertise in building applications which benefit from the capabilities of quantum computers, with Fujitsu’s hardware technology, its customer base – the largest in Japan – and its versatile ICT capabilities, including AI. The partnership aims to contribute to the creation of new businesses and the transformation of existing businesses by introducing new solutions to the computational challenges facing customers in a variety of fields, including finance, life sciences, energy, retail and distribution.

About Fujitsu

Fujitsu is the leading Japanese information and communication technology (ICT) company offering a full range of technology products, solutions and services. Approximately 155,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE: 6702) reported consolidated revenues of 4.5 trillion yen (US$40 billion) for the fiscal year ended March 31, 2017. For more information, please see http://www.fujitsu.com.

About 1QBit

1QBit is dedicated to building quantum and quantum-inspired software to solve the world’s most demanding computational challenges. The company’s hardware-agnostic platforms and services enable the development of applications which scale alongside advances in both classical and quantum computers. 1QBit partners with Fortune 500 clients and leading hardware providers to redefine intractable industry problems in the areas of optimization, simulation, and machine learning. Headquartered in Vancouver, Canada, 1QBit’s interdisciplinary team of 50 comprises mathematicians, physicists, chemists, software developers, and quantum computing experts who develop novel solutions to problems, from research through to commercial application development. For more information, visit: 1qbit.com.

Source: Fujitsu, 1QBit Collaborate on Quantum-Inspired AI Cloud Service

How Do You Code for a Quantum Computer? | Inverse

Ever since the invention of the computer, coders have been working in the medium of 1s and 0s, creating the modern world by directly or indirectly manipulating binary computing bits that can be in one of two possible states. Now, real quantum computers are looming on the horizon with promises of speeds anywhere from thousands to hundreds of millions of times faster than classical computers, and so that binary state of affairs may be about to change. Yet how will mere mortal coders manage to take advantage of all that potential speed, when programming for quantum computers has been described as “a task as baffling as quantum mechanics itself”?

According to Simon Devitt, a quantum computing researcher at Japan’s Riken Center for Emergent Matter Science, the answer is: with lots and lots of help. Quantum coding may be among the most complex tasks ever undertaken, and so coders are starting to pool their resources and create the tools they need to progress the field as a whole.

On the most basic hardware level, quantum computers differ from classical computers because they are not binary — rather than working with bits that are in one of two states, quantum processors work with “qubits” that can exist in a superposition of both states simultaneously. A true quantum computer would keep all of its qubits in a superposition of states, meaning that the entire array would be in all possible combinations of states at once. Speaking in 2013 about his company’s work in quantum computers, D-Wave Systems’ Geordie Rose famously made the true insanity of this idea clear by describing his machine’s calculations as being coordinated across the multiverse.
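
The “all possible combinations” idea can be made concrete with a small classical simulation: an n-qubit register is described by 2^n complex amplitudes, and applying a Hadamard gate to each qubit spreads the amplitude evenly over every basis state. A minimal NumPy sketch, not tied to any vendor’s hardware:

```python
import numpy as np

# State vector of 2 qubits: 2**2 = 4 complex amplitudes,
# one for each basis state |00>, |01>, |10>, |11>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = np.zeros(4, dtype=complex)
state[0] = 1.0                                # start in |00>

# Apply H to each qubit: the tensor product H (x) H acts on the full vector.
state = np.kron(H, H) @ state

print(state)               # all four amplitudes are 0.5
print(np.abs(state) ** 2)  # each outcome is measured with probability 0.25
```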

Yet most classical software engineers aren’t writing their programs in the raw bits of binary machine code, so why would quantum programmers write directly in the medium of qubits? Companies all over the world are developing tools to allow coders to step right over the most mind-boggling aspects of quantum physics. Microsoft is developing its LIQUi|> initiative (pronounced “liquid”), in part to translate appropriate algorithms from well-known coding languages to quantum machine code for execution on a quantum computer.

And just what constitutes an “appropriate algorithm” for execution across the multiverse? It’s a little bit complicated.

First, regardless of the coding language used, all math executed on a quantum computer must be “reversible.” What this means is that all outputs have to contain enough information that the entire process of creating them can be run backwards to regenerate the input. So right off, 1 + 1 = 2 is out, since the raw number 2 cannot on its own be used to derive the two 1s that went into it. Luckily there is a reversible version of every possible computable function, including 1 + 1 — but they’re much more complex to actually use. Translating complicated regular programs into a reversible form is possible, but often makes the code so inefficient that it’s not worth the effort.
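
The standard trick for making a function like addition reversible is to carry one of the inputs through to the output: the map (a, b) -> (a, a + b) loses no information, because subtracting recovers b. A minimal classical sketch of the idea, mirroring how quantum circuits typically embed otherwise irreversible functions:

```python
def irreversible_add(a, b):
    # Not reversible: from the output 2 alone you cannot tell
    # whether the inputs were (1, 1), (0, 2), (2, 0), ...
    return a + b

def reversible_add(a, b):
    # Reversible: carry one input alongside the sum.
    return a, a + b

def reversible_add_inverse(a, s):
    # Run the computation backwards to regenerate the inputs.
    return a, s - a

assert reversible_add(1, 1) == (1, 2)
assert reversible_add_inverse(*reversible_add(1, 1)) == (1, 1)
```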

Another problem has to do with decoherence, or the fact that the superposition that is critical to quantum computing is only possible when the computer itself is completely, totally cut off from the outside world. LIQUi|>’s Dave Wecker wrote in an email to Inverse that this means even just extracting an output from a quantum computer can become a problem. “When you ask for an answer, you tend to have to start all over to get another one,” he writes. “This makes you choose carefully what you really want.”

Quantum computers are thus “black boxes” in which even the programmers themselves can’t observe the path their data takes from input to output, making bug-checking extremely labor-intensive. Jorge Cham of PHD Comics coined the term “quanfidential” to describe this regime of quantum secrecy.

Perhaps all this difficulty could be surmounted with mass testing and error correction, but there’s an even bigger problem that makes any such brute force solution impossible: the absence of truly robust quantum computers on which to test the code itself. Though D-Wave Systems claims to be selling the world’s first commercial quantum computers, these machines are limited in terms of which quantum algorithms they can execute and, in any case, newly affordable models still come in at a whopping $15 million.

Devitt says this is one of the biggest difficulties with the modern development process for quantum code. “How does a classical software developer or coder actually construct quantum code?” he wrote in an email to Inverse. “The classical community certainly doesn’t sit down with pen and paper to develop an algorithm, but with quantum algorithms that is still the norm.” Back of the napkin calculations may be romantic, but they’ll never change the world if they can’t be tested.

Right now, the absolute cutting-edge generalized quantum computers (those that theoretically should not have the limitations of D-Wave’s 2,000-qubit devices) come from Microsoft, IBM, and soon Google, but even these are mostly useful as proofs of concept, since they don’t incorporate enough qubits to have hit the steep part of the exponential curve of quantum computing power. Though they are almost certainly the prototypes from which world-changing quantum computers will be born, they are not themselves such revolutionary machines, just yet.

IBM is making its prototype quantum computer available over the cloud so it can be used to start testing quantum code, though the limited number of qubits means it’s still too limited to be useful for much more than computing research. To supplement this, teams from around the world, most notably at LIQUi|> itself, are learning how to model quantum computation on normal computer hardware. These quantum simulators don’t provide the incalculable speed benefits of quantum hardware, but in exchange they can do their work without requiring real entanglement of quantum particles. As a result, these simulators not only exist for active development right now, but they also allow observation of their own processes, meaning they are not black boxes. In principle, the workflow would be to get a quantum algorithm working in the transparent but slow-motion context of a simulated quantum environment, and only then port it over to the opaque but much speedier context of an actual quantum computer.

Unfortunately, every additional simulated qubit doubles the requirement for the simulating computer’s RAM; simulating 30 qubits in superposition takes around 32 GB of memory, while 31 qubits takes around 64 GB. By the time you hit 45 simulated qubits, the classical computer already needs well over a petabyte of memory to keep everything straight. Larger quantum computers than that will be needed to do useful calculation, meaning that right now the most potentially important pieces of quantum code can’t be tested in either a real or simulated quantum environment.
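
That doubling falls directly out of the state-vector representation: n qubits take 2^n complex amplitudes. The sketch below reproduces the article’s figures assuming 32 bytes per amplitude; a bare double-precision complex number is 16 bytes, so the extra factor presumably reflects simulator overhead.

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=32):
    """Memory needed to hold a full n-qubit state vector.

    2**n amplitudes; 32 bytes per amplitude is an assumption chosen
    to match the article's figures (a complex double alone is 16 bytes).
    """
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 31, 45):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**30:,.0f} GiB")
# 30 qubits: 32 GiB
# 31 qubits: 64 GiB
# 45 qubits: 1,048,576 GiB -- a full pebibyte
```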

Devitt believes that progressing these tools for facilitating quantum coding will be absolutely essential to realizing that code’s potential — even before the computers themselves hit the wider market. “The most astonishing and impactful quantum algorithms and programs,” he writes, “will not be discovered or invented by physicists or quantum computing engineers.”

In that way, at least, the development of quantum programming may look a little like the classical code that came before it.

Source: How Do You Code for a Quantum Computer? | Inverse

Microsoft Ups The Ante In Enterprise IoT and Edge Computing

Microsoft has added new capabilities to its Internet of Things platform, making it one of the first to support emerging industrial use cases such as edge analytics and device management.

[Image: Azure IoT Suite. Source: Microsoft]

Microsoft is one of the few companies to deliver end-to-end IoT capabilities covering the entire spectrum of solutions. From Windows 10 IoT Core to Power BI, the company offers a comprehensive set of IoT services to enterprises. Customers with experienced technical teams can build highly customized solutions by connecting the dots from the Azure services portfolio. They can use IoT Hub, Event Hub, Stream Analytics, Azure ML, Azure Functions, DocumentDB, and Power BI to build powerful IoT applications.

To ease the creation of IoT solutions, Microsoft invested in a set of open-source blueprints that are delivered through Azure IoT Suite, a PaaS service. Enterprise customers can pick an existing template targeting remote monitoring or predictive maintenance scenarios and customize it for their needs. This reduces the cost and effort involved in developing a cloud-based IoT solution from the ground up. Behind the scenes, Microsoft Azure IoT Suite provisions all the required resources and maintains them on behalf of the customer. Expanding the available blueprints, Microsoft has added a new remote factory scenario to Azure IoT Suite, a preconfigured solution for onboarding assets across multiple facilities.
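
For a sense of what “connecting the dots” looks like in code, here is a minimal device-to-cloud telemetry sketch using the azure-iot-device Python SDK (the SDK has evolved since this article was written); the connection string and payload are placeholders.

```python
import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder: a device connection string from the IoT Hub registry.
CONN_STR = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
client.connect()

# Send one telemetry reading; on the cloud side, services such as
# Stream Analytics or Azure ML can pick it up from IoT Hub.
reading = {"deviceId": "press-42", "temperature": 71.3}
client.send_message(Message(json.dumps(reading)))

client.shutdown()
```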

To simplify connecting devices to the cloud, Microsoft has gone a step further and offers an end-to-end IoT SaaS platform built on top of Azure. Branded as Microsoft IoT Central, the service hides away the complexity of connecting devices to the cloud and managing them. It is a fully managed SaaS offering for customers and partners that enables compelling IoT scenarios without requiring cloud-solution expertise. Microsoft IoT Central aims to make it easy to onboard devices and manage them in the cloud through a streamlined user experience and consolidated billing. Customers looking for deeper customization can use either Azure IoT Suite or an array of IoT services available in Azure. Microsoft is one of the few companies to offer core services, PaaS, and SaaS for IoT.

[Image: Azure IoT Platform. Source: Janakiram MSV]

Edge computing attempts to reduce the latency involved in the round trip to the cloud. It mimics the public cloud capabilities by delivering a subset of the cloud functionality. Data ingestion, complex event processing, machine learning, message routing, storage, and analytics are some of the key services exposed by the edge layer.

Source: Microsoft Ups The Ante In Enterprise IoT and Edge Computing

Microsoft introduces Azure IoT Edge for connected industrial applications – GeekWire

Microsoft used one of its biggest events of the year to remind developers that edge computing is coming, and it’s going to change everything all over again.

“Data has gravity, and computational power will move toward it,” said CEO Satya Nadella Wednesday at Microsoft Build 2017. He also mentioned this shift during Microsoft’s most recent earnings call:

For example, right when everyone’s talking about the cloud, the most interesting part is the edge of the cloud. Whether it’s IoT, whether it’s the auto industry, whether it’s what’s happening in retail, essentially compute is going where the data gets generated, and increasingly data is getting generated at the volumes in which it’s drawing compute to it, which is the edge.

As the Internet of Things hits its stride, and mobile devices continue to become more powerful, it’s making more and more sense to move processing out to those devices. During its first keynote presentation of Build 2017, Microsoft showed off a preview of Azure IoT Edge, a new service within Azure that will help developers tap into edge computing when it is released later this year.

Edge computing, or what Microsoft calls “the intelligent edge,” is the result of an ebb and flow in computing trends. Before the internet, your computer processed everything locally. As smartphones became the engine of tech, relatively weak mobile processors and power consumption concerns — coupled with the growth of cloud services — shifted a lot of that processing to data centers, or “the cloud.”

The pendulum is swinging back the other way. As we connect everything to the internet (often under less-than-ideal conditions), and mobile processors start to rival their PC counterparts, performance concerns start to argue for moving processing power closer to the end user. The speed of light still dictates an awful lot of what happens in the tech world, and any app or service promising real-time information can’t wait even seconds.
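
Some rough arithmetic shows why. Light in optical fiber covers only about 200 km per millisecond, so distance alone puts a hard floor under cloud round-trip latency before any routing or processing is counted; the distances below are purely illustrative.

```python
# Best-case round-trip latency imposed by the speed of light in fiber
# (roughly two-thirds of c, i.e. about 200 km per millisecond).
FIBER_KM_PER_MS = 200

def min_round_trip_ms(distance_km):
    """Physical floor only; real networks add routing and queuing delay."""
    return 2 * distance_km / FIBER_KM_PER_MS

for d in (100, 1_000, 5_000):  # illustrative device-to-datacenter distances
    print(f"{d:>5} km -> at least {min_round_trip_ms(d):.0f} ms round trip")
```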

This is going to change the way applications are developed. The tech industry has been talking about this concept for several years, but no one is exactly sure how it will play out with the arrival of autonomous cars, the industrial internet, or wearable computers.

But the tools are arriving. Azure IoT Edge will allow developers working on Azure to run, on the device itself, some of the same code they currently have to run in the cloud. It’s a runtime that works on either Windows or Linux, needs only the processing power of a small computer like the Raspberry Pi, and is based on serverless computing principles.

Sam George, director of Azure IoT, demonstrated on Wednesday how Sandvik Coromant, a Swedish company that builds expensive precision metal-cutting machines, used Azure IoT to reduce the amount of time required to detect a problem that would require the machine to be shut down before failing completely.

The idea is simple: it can take two seconds for the machine to detect a problem, report back to the cloud-hosted logic that determines whether or not the machine needs to be shut down, and convey the decision back to the machine. With some of these complicated machines, that delay might cause a catastrophic failure that could cost Sandvik millions.

Azure IoT Edge allows that decision-making logic to run on the machine itself, reducing the time needed to make an emergency shutdown decision from two seconds to 100 milliseconds. The stakes aren’t that high for most applications, but that level of performance could allow connected devices to manage themselves more intelligently without having to phone home for instructions.
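
The pattern behind the Sandvik example might look something like the following sketch: an IoT Edge module consuming sensor readings and issuing the shutdown command locally instead of waiting on the cloud. It uses the azure-iot-device Python SDK; the input/output names, message format, and threshold are all hypothetical.

```python
import json
from azure.iot.device import IoTHubModuleClient, Message

VIBRATION_LIMIT = 0.8  # hypothetical shutdown threshold

# Inside an IoT Edge module, the runtime supplies the connection details.
client = IoTHubModuleClient.create_from_edge_environment()
client.connect()

while True:
    msg = client.receive_message_on_input("sensorInput")  # blocks until a reading arrives
    reading = json.loads(msg.data)
    if reading["vibration"] > VIBRATION_LIMIT:
        # The decision happens on the machine in milliseconds; no cloud
        # round trip is needed before the emergency stop is issued.
        client.send_message_to_output(Message("EMERGENCY_STOP"), "control")
```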

This is one of those interesting areas in tech: we don’t know exactly what people will be able to accomplish with edge computing, but we know that it makes an awful lot of sense. Amazon Web Services has a similar feature called Amazon Greengrass that was launched last November as a limited preview.

“We’re reaching the point where you can’t train and run all the data in the cloud,” Nadella said during the keynote. It’s very likely that some Azure customers have already reached that point, and Azure IoT Edge could be a good selling point for those customers.

Source: Microsoft introduces Azure IoT Edge for connected industrial applications – GeekWire