Schneider Electric appoints new VP for Ireland in edge computing drive – Data Economy

Company believes Ireland could soon be as desirable as other rival data centre destinations such as London and Amsterdam.

Schneider Electric has appointed Ivan Habovcik as Vice President for its IT Division in Ireland.

Habovcik replaces Vincent Barro, who has moved to a new role managing the IT Division in Switzerland.

Habovcik has served Schneider Electric in business development for its Single-Phase UPS business across Central and Eastern Europe and Israel, a region covering fifteen countries.

He has also held roles as Country Manager and Vice President, IT Business for the Czech Republic, Poland, Romania, Slovakia and Vietnam.

Habovcik said: “Ireland presents a perfect opportunity to showcase our industry-leading edge computing offers.

“I am proud to be moving to a new European country that has seen such significant development in both the technology and data centre sectors, and I am looking forward to working as part of the Ireland team to drive the business into a new period of accelerated growth.

“Ireland is a thriving hub of data centre activity. I believe there is a large requirement for edge computing services to support our growing number of colocation customers.

“I also believe that within time, the country could rival other destinations and become as desirable as Amsterdam or London.”

Source: Schneider Electric appoints new VP for Ireland in edge computing drive – Data Economy

New Linux Open Source Group Focuses on IoT and Edge Computing

Today, the Linux Foundation launched yet another open source group — this one relating to the Internet of Things (IoT) and edge computing. The new group is EdgeX Foundry, and its goal is to standardize industrial IoT edge computing.

According to the Linux Foundation, IoT efforts are fragmented and need a common framework. In addition, the sheer quantity of data that will be transmitted from IoT devices is driving adoption of edge computing, in which connected devices and sensors transmit data to a local gateway device instead of sending it back to the cloud or a central data center.
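As a concrete illustration of that gateway pattern, here is a minimal Python sketch; every name in it (EdgeGateway, send_to_cloud and so on) is invented for illustration and is not part of any EdgeX Foundry API:

```python
# Hypothetical sketch of the gateway pattern described above: devices report to
# a local gateway, which aggregates readings and sends only summaries upstream.
from dataclasses import dataclass
from statistics import mean
from typing import List


@dataclass
class SensorReading:
    device_id: str
    value: float


class EdgeGateway:
    def __init__(self, batch_size: int = 100):
        self.batch_size = batch_size
        self.buffer: List[SensorReading] = []

    def ingest(self, reading: SensorReading) -> None:
        """Accept a reading from a local device; flush upstream when full."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """Forward one aggregate record instead of every raw reading."""
        summary = {
            "count": len(self.buffer),
            "mean_value": mean(r.value for r in self.buffer),
        }
        send_to_cloud(summary)
        self.buffer.clear()


def send_to_cloud(summary: dict) -> None:
    # Stand-in for an HTTP or MQTT publish to a central data centre.
    print(f"uploading summary: {summary}")


gw = EdgeGateway(batch_size=3)
for v in (20.5, 21.0, 22.1):
    gw.ingest(SensorReading("sensor-1", v))  # third reading triggers a flush
```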

“There were a group of companies that had come to us with a repeated problem; they had devices with different protocols, and they wanted them to interoperate,” said Philip DesAutels, senior director of IoT at The Linux Foundation.

Last autumn, the Linux Foundation began working on a project it called IoTX to address these issues. It also began working with Dell, which had created its own FUSE software for IoT. The new EdgeX Foundry open source project combines the work done by these two groups. Dell is seeding EdgeX Foundry with its FUSE source code base under Apache 2.0. The contribution consists of more than a dozen microservices and more than 125,000 lines of code.

Nearly 50 companies, including Dell, Cumulocity, and VMware, have joined EdgeX Foundry as initial members. Their software and products comprise a marketplace, offering interoperable IoT components that can run on any hardware or operating system and with any combination of application environments.

Interoperability between community-developed software will be maintained through a certification program.

The Linux Foundation will establish a governance and membership structure for EdgeX Foundry. A governing board will guide business decisions and marketing and ensure alignment between the technical communities and members. The technical steering committee will provide leadership on the code and guide the technical direction of the project.

The IoT Ecosystem

It seems like every day we learn of a new IoT platform created by big companies such as GE Digital (Predix), Verizon (ThingSpace), and Cisco (Jasper).

Asked how EdgeX Foundry will relate to all these IoT platforms, DesAutels said, “Most of these platforms presume something is sending data to them. Where they lack is in the last mile to the device. There’s a gap in the infrastructure. EdgeX Foundry is a great way to feed data to one of these platforms.”

Besides not competing with vendors’ IoT platforms, EdgeX Foundry does not plan to create a new standard. It aims to unify existing standards and edge applications, and it is collaborating with relevant open source projects, standards groups, and industry alliances, several of which it named in today’s announcement, to ensure consistency and interoperability.

ETSI’s Multi-Access Edge Computing (MEC) group was not named in today’s announcement. But DesAutels said there are a lot of IoT and edge computing groups, which EdgeX Foundry is in the process of reaching out to.

Source: New Linux Open Source Group Focuses on IoT and Edge Computing

Aparna Systems ‘Cloud-in-a-Box’ Targets Edge Computing

Cloud infrastructure startup Aparna Systems today launched an open-software “cloud-in-a-box.”

Aparna’s Orca cloud and server technology is an “ultra-converged” compute, storage, and network solution, not to be confused with a hyperconverged solution, according to the company. The latter relies on an external top-of-rack switch to create server clusters, while ultra-convergence goes beyond hyperconvergence by integrating the network switching.

The CPU core density is noteworthy, too: Aparna Systems claims Orca can support up to 10,000 cores per rack while consuming less than 75 watts per server.
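As a back-of-envelope check of those figures (the article gives no per-server core count, so the 16 cores per server below is purely an assumption):

```python
# Only the 10,000 cores/rack and <75 W/server figures come from the article;
# the cores-per-server value is an illustrative assumption.
CORES_PER_RACK = 10_000
WATTS_PER_SERVER = 75          # upper bound quoted by Aparna
ASSUMED_CORES_PER_SERVER = 16  # assumption for illustration

servers_per_rack = CORES_PER_RACK / ASSUMED_CORES_PER_SERVER
rack_power_kw = servers_per_rack * WATTS_PER_SERVER / 1000
watts_per_core = WATTS_PER_SERVER / ASSUMED_CORES_PER_SERVER

print(f"~{servers_per_rack:.0f} servers/rack, <{rack_power_kw:.1f} kW/rack, "
      f"<{watts_per_core:.1f} W/core")
# ~625 servers/rack, <46.9 kW/rack, <4.7 W/core
```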

Sam Mathan founded the Fremont, California-based company that formally launched today. He’s the former CEO of Matisse Networks and Amber Networks.

The company raised $500,000 from Divergent Venture Partners; the rest of its funding comes from Mathan and other Silicon Valley execs: former Cirrus Logic CFO Sam Srinivasan, Brocade founder Kumar Malavalli, and Clearstone Partners’ Vish Mishra.

In an interview with SDxCentral, Mathan said data-intensive applications are driving the need for increased storage and compute, but only the “Super 7” cloud computing companies, including Google, Facebook, Microsoft, and Amazon, can afford to build the data centers necessary to deploy more storage, compute, and networking.

“The rest of the industry, whether the enterprise or service providers, have to figure out how to use that same technology, same distributed application infrastructure, in a much more cost-effective fashion,” Mathan said. “One of the things we also see in the marketplace is a huge amount of applications infrastructure growth that requires more low-latency access at the edge.”

Aparna Systems built its open-software cloud-in-a-box to address both needs. It’s targeting service provider and enterprise customers, and the system is well suited to both edge computing and central data centers with limited space and power, he said.

“Aparna’s Cloud-in-a-Box has the potential to be a real game-changer in a variety of applications,” said Michael Howard, senior research director and advisor for Carrier Networks at IHS Markit in a statement. “This is particularly true at the edge of the network, including in central offices, where carriers have struggled to find a practical and affordable way to deploy adequate compute and storage resources. The system’s high density and design innovations combine to also drastically improve scalability and energy efficiency compared to blade servers.”

Source: Aparna Systems ‘Cloud-in-a-Box’ Targets Edge Computing

Is edge computing set to blow away the cloud? – Cloud Tech News

Just about every new piece of technology is considered disruptive to the extent that it is expected to replace older technologies. Sometimes, as with the cloud, old technology is simply re-branded to make it more appealing to customers, creating the illusion of a new market. Let’s remember that cloud computing had previously existed in one shape or form: at one stage it was called on-demand computing, and then it became ‘application service provision’.

Now there is edge computing, which some people are also calling fog computing, and which some industry commentators feel is going to replace the cloud altogether. Yet the question has to be: will it really? The same was said when television was invented; it was meant to be the death of radio. Yet people still tune into radio stations in their thousands every single day.

Of course, there are some technologies that are really disruptive in that they change people’s habits and their way of thinking. Once people enjoyed listening to Sony Walkmans, but today most folk listen to their favourite tunes using smartphones – thanks to iPods and the launch of the first iPhone by Steve Jobs in 2007, which put the internet in our pockets and more besides.

Levine’s prophecy

So why do people think edge computing will blow away the cloud? The claim is made in many online articles. Clint Boulton, for example, wrote about it in his Asia Cloud Forum article, ‘Edge Computing Will Blow Away The Cloud’, in March this year. He cites venture capitalist Peter Levine, a general partner at Andreessen Horowitz, who believes that more computational and data processing resources will move towards “edge devices” – such as driverless cars and drones – which make up at least part of the Internet of Things. Levine prophesies that this will mean the end of the cloud, as data processing moves back towards the edge of the network.

In other words, the trend until now has been to centralise computing within the data centre, whereas in the past it was often decentralised or localised nearer the point of use. Levine sees driverless cars as data centres in themselves: each has more than 200 CPUs working to keep it on the road and out of accidents. The nature of autonomous vehicles means that their computing capabilities must be self-contained, and to ensure safety they minimise any reliance they might otherwise have on the cloud. Yet they don’t dispense with it.

Complementary models

The two approaches may in fact end up complementing each other. Part of the argument for bringing computation back to the edge comes down to increasing data volumes, which lead to ever more frustratingly slow networks. Latency is the culprit. Data is getting bigger: there will be more data per transaction, more video and more sensor data, and virtual and augmented reality will play an increasing part in that growth too. With this growth, latency will become more challenging than it has been. Furthermore, while it might make sense to put data close to a device such as an autonomous vehicle to eliminate latency, a remote way of storing data via the cloud remains critical.

The cloud can still be used to deliver certain services too, such as media and entertainment. It can also be used to back up data and to share data emanating from a vehicle for analysis by a number of disparate stakeholders. From a data centre perspective, and moving beyond autonomous vehicles to general business operations, creating a number of smaller data centres or disaster recovery sites may reduce economies of scale and make operations less efficient. Yes, latency might be mitigated, but the data may also be held within the same circle of disruption, with disastrous consequences when disaster strikes; so, for the sake of business continuity, some data may still have to be stored or processed elsewhere, away from the edge of the network.

In the case of autonomous vehicles, which must operate whether or not a network connection exists, it makes sense for certain types of computation and analysis to be completed by the vehicle itself. However, much of this data is still backed up via a cloud connection whenever one is available. Edge and cloud computing are therefore likely to follow a hybrid approach rather than a standalone one.
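As a rough sketch of that hybrid, store-and-forward pattern, here is a hypothetical Python example in which the vehicle always records locally and drains its backlog to the cloud only when a link is available (all names are invented for illustration):

```python
import queue


class VehicleDataLogger:
    def __init__(self) -> None:
        self.pending: "queue.Queue[dict]" = queue.Queue()  # records awaiting upload

    def record(self, event: dict) -> None:
        # Always log locally; the vehicle never blocks on the network.
        self.pending.put(event)

    def sync(self, cloud_is_reachable: bool) -> int:
        # Drain the local buffer to the cloud whenever a link exists.
        uploaded = 0
        while cloud_is_reachable and not self.pending.empty():
            upload_to_cloud(self.pending.get())
            uploaded += 1
        return uploaded


def upload_to_cloud(event: dict) -> None:
    print(f"backed up: {event}")


logger = VehicleDataLogger()
logger.record({"speed_kmh": 52, "lidar_frames": 14})
logger.sync(cloud_is_reachable=False)  # offline: nothing lost, nothing sent
logger.sync(cloud_is_reachable=True)   # back online: buffered data is uploaded
```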

Edge to cloud

Saju Skaria, senior director at consulting firm TCS, offers several examples of where edge computing could prove advantageous in his LinkedIn Pulse article, ‘Edge Computing Vs. Cloud Computing: Where Does the Future Lie?’. He certainly doesn’t think that the cloud is going to blow away.

“Edge computing does not replace cloud computing…in reality, an analytical model or rules might be created in a cloud then pushed out to edge devices… and some [of these] are capable of doing analysis.” He then goes on to talk about fog computing, which involves data processing from the edge to a cloud. He is suggesting that people shouldn’t forget data warehousing too, because it is used for “the massive storage of data and slow analytical queries.”
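A minimal sketch of the pattern Skaria describes might look like the following, with a purely hypothetical rule format: rules are authored in the cloud, shipped to devices as plain data, and evaluated locally without a round trip:

```python
import json

# "Cloud" side: author a rule set and serialise it for distribution.
rules = [
    {"sensor": "temperature", "op": "gt", "threshold": 80.0, "action": "alert"},
    {"sensor": "vibration", "op": "gt", "threshold": 5.0, "action": "shutdown"},
]
payload = json.dumps(rules)  # pushed to edge devices, e.g. over MQTT


# Edge side: evaluate incoming readings against the downloaded rules.
def evaluate(reading: dict, rules_json: str) -> list:
    actions = []
    for rule in json.loads(rules_json):
        value = reading.get(rule["sensor"])
        if value is not None and rule["op"] == "gt" and value > rule["threshold"]:
            actions.append(rule["action"])
    return actions


print(evaluate({"temperature": 92.5, "vibration": 1.2}, payload))  # ['alert']
```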

Eating the cloud

In spite of this argument, Gartner’s Thomas Bittman seems to agree that ‘Edge Will Eat The Cloud’. “Today, cloud computing is eating enterprise datacentres, as more and more workloads are born in the cloud, and some are transforming and moving to the cloud… but there’s another trend that will shift workloads, data, processing and business value significantly away from the cloud. The edge will eat the cloud… and this is perhaps as important as the cloud computing trend ever was.”

Later on in his blog, Bittman says: “The agility of cloud computing is great – but it simply isn’t enough. Massive centralisation, economies of scale, self-service and full automation get us most of the way there – but it doesn’t overcome physics – the weight of data, the speed of light. As people need to interact with their digitally-assisted realities in real-time, waiting on a data centre miles (or many miles) away isn’t going to work. Latency matters. I’m here right now and I’m gone in seconds. Put up the right advertising before I look away, point out the store that I’ve been looking for as I drive, let me know that a colleague is heading my way, help my self-driving car to avoid other cars through a busy intersection. And do it now.”

Data acceleration

He makes some valid points, but he falls into the argument that has often been used about latency and data centres: that they have to be close together. The truth, however, is that wide area networks will always be the foundation stone of both edge and cloud computing. Moreover, Bittman has clearly not come across data acceleration tools such as PORTrockIT and WANrockIT. While physics is certainly a limiting and challenging factor that will always be at play in networks of all kinds, including WANs, it is possible today to place your data centres at a distance from each other without suffering an increase in data and network latency. Latency can be mitigated, and its impact can be significantly reduced no matter where the data processing occurs and no matter where the data resides.
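The arithmetic behind that point is worth spelling out. A single TCP stream's throughput is roughly capped at window size divided by round-trip time, which is one reason pipelining or parallelising transfers can mask WAN latency; the figures below are illustrative only and say nothing about how PORTrockIT or WANrockIT actually work:

```python
# Single-stream TCP throughput is roughly window_size / RTT.
WINDOW_BYTES = 64 * 1024   # classic 64 KiB TCP window
RTT_SECONDS = 0.080        # 80 ms round trip, e.g. a transatlantic link

single_stream_mbps = WINDOW_BYTES * 8 / RTT_SECONDS / 1e6
streams_for_1gbps = 1000 / single_stream_mbps

print(f"one stream: ~{single_stream_mbps:.1f} Mbit/s; "
      f"~{streams_for_1gbps:.0f} parallel streams to fill a 1 Gbit/s link")
# one stream: ~6.6 Mbit/s; ~153 parallel streams to fill a 1 Gbit/s link
```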

So let’s not see edge computing as the new solution that sweeps the old away. It is but one solution, and so is the cloud; together, the two technologies can support each other. As one commentator puts it in response to a Quora question about the difference between the two, “edge computing is a method of accelerating and improving the performance of cloud computing for mobile users.” The argument that edge will replace cloud computing is therefore a very foggy one. Cloud computing may at some stage be re-named for marketing reasons, but it is here to stay.

Source: Is edge computing set to blow away the cloud? – Cloud Tech News

What’s in a name? Mobile Edge Computing turns into Multi-access Edge Computing, nobody hurt | TelecomTV

There had been a low-key, gentlepersonly tussle going on over what we used to call Mobile Edge Computing (MEC). Traditional ‘mobile’ edge advocates were keen on keeping the application anchored in the mobile space while other voices wanted a more holistic or heterogeneous approach which would include WiFi and other access technologies.

As the industry moved swiftly towards a more software-defined, cloud-oriented world, with increasingly ambitious and demanding network applications driving the need for network investment, it became clear that a ‘heterogeneous’ edge computing capability would be necessary. First, because WiFi was being used interchangeably with cellular. Also because new uses like IoT would need edge computing as a filter to stop the core network being overwhelmed by data. And not forgetting latency-critical applications, such as those guiding remote surgery or autonomous cars: these ideally require sub-millisecond round-trip delay, and physics dictates that such critical applications will have to operate at the edge of the network to achieve it.
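The physics is easy to check: light in optical fibre travels at roughly 200,000 km/s, so a sub-millisecond round-trip budget alone, before any processing time is counted, confines the serving compute to within about 100 km of the device:

```python
# Light in fibre covers roughly 200,000 km/s (about two-thirds of c in vacuum).
FIBRE_KM_PER_S = 200_000
RTT_BUDGET_S = 0.001  # a 1 ms round-trip budget

max_one_way_km = FIBRE_KM_PER_S * RTT_BUDGET_S / 2
print(f"max one-way fibre distance: {max_one_way_km:.0f} km")  # 100 km
```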

Now the moment has come for the big change. As we wrote last year (see – Edge Computing prepares for a Multi-access future), the ETSI MEC industry standards group announced that “mobile edge computing” was to be known as “multi-access edge computing” from 2017, to better reflect the growing interest in MEC (or fog computing) from non-cellular players. Furthermore, far from being purely a 5G technology, MEC is ready for deployment now.

True to its word, ETSI has just announced that it will address the reality of current and future heterogeneous networks by officially changing the group’s name from the Mobile Edge Computing Industry Specification Group to the Multi-access Edge Computing (MEC) Industry Specification Group. The change comes with a new leadership team and a new scope that extends beyond the original focus on edge computing for mobile access networks.

At the 9th meeting of ETSI MEC, held on 13-17 March in Sophia Antipolis, France, Alex Reznik from Hewlett Packard Enterprise was elected as the new chairman. Pekka Kuure and Sami Kekki, from Nokia and Huawei respectively, were elected as new vice chairs, while Adrian Neal, from Vodafone, was re-elected as vice chair.

The scope of the group has also expanded to address multiple MEC hosts being deployed in many different networks, owned by various operators and running edge applications in a collaborative manner.

ETSI says, “Future work will take into account heterogeneous networks using LTE, 5G, fixed and WiFi technologies. Additional features of the current work include developer-friendly and standard APIs, standards-based interfaces among multi-access hosts, and alignment with the NFV architecture.

“The MEC system will provide a standardized and open system able to support different virtualization techniques, with the capability for an application to discover applications and services available on other hosts, and to direct requests and data to one or more hosts. These features among others will lead to a system offering standard APIs, management interfaces with orchestrator and virtualized infrastructure, standardized interfaces between MEC hosts for traffic as well as service routing or application relocation, and standardized interfaces with transport networks.

“In phase two, we are expanding our horizons and addressing challenges associated with the multiplicity of hosts and stakeholders. The goal is to enable a complete multi-access edge computing system able to address the wide range of use cases which require edge computing, including IoT,” says Alex Reznik, chairman of MEC ISG. “We will continue to work closely with 3GPP, ETSI NFV ISG and other SDOs as well as key industry organizations to ensure that edge computing applications can be developed to a standardized, broadly adopted platform.”

Source: What’s in a name? Mobile Edge Computing turns into Multi-access Edge Computing, nobody hurt | TelecomTV