According to a recent European Commission study, the market value of the IoT in the EU is expected to exceed one trillion euros in 2020. How can global telcos ensure they get a good slice of the IoT cake rather than becoming pure bandwidth providers? Patrick Steiner, Associate Manager, Specialist Solution Architecture at Red Hat, believes the answer lies in supporting Internet of Things scenarios with Mobile Edge Computing (MEC). Here, we ask him how MEC is impacting the IoT market and for his views on the future of the telco ecosystem.

Q: In a nutshell, what is Mobile Edge Computing (MEC)?

Mobile edge computing (MEC) is a highly distributed computing architecture based on the deployment of small computing units in both outdoor and indoor radio access network (RAN) facilities at the edge of the telcos’ networks.

Q: What is the overall goal of using MEC?

The goal is to provide on-premises computing, using standard x86 architecture rather than custom hardware, that is able to run in isolation from the core network. This achieves a number of things: it increases network agility, encourages innovation at the edge of networks, and reduces the capex and opex associated with spinning up new services.

Q: How do you define MEC use cases, and how does MEC relate to IoT?

As a member of the European Telecommunications Standards Institute (ETSI), Red Hat is helping define MEC use cases and architecture details. We’re collaborating with the wider community to ensure common open standards prevail. Because MEC is isolated from the main network, it is ideal for use cases such as video analytics, location services, augmented reality, data caching, optimised local content distribution and, of course, IoT.

Q: Why should telcos choose MEC ahead of other architectures to provide new IoT-driven service opportunities?

By using standard x86 architecture, along with a virtualised software stack, it is possible to decouple the software from the hardware.
This speeds up the deployment of new services, as only new software needs to be installed rather than new hardware. Telcos can also exploit economies of scale by consolidating many IoT applications onto the same industry-standard high-volume servers, switches, routers, and storage, transforming these environments into elastic, pooled resources that can scale up or down as needed. And, as you will see later, it also helps improve security.

Q: How can MEC help differentiate telcos’ solutions from over-the-top (OTT) players? Can MEC even effectively host OTT platforms?

I’ll take the second part of the question first. In a word, “yes”: MEC can host OTT platforms. The key element here is the MEC IT application server, which is integrated at the RAN element. The MEC server platform consists of a hosting infrastructure (equivalent to the NFV infrastructure) and an application platform. In some ways, the MEC application platform is similar to an IoT gateway or a more classical IT application platform (like Java EE).

In terms of helping telcos differentiate their services, the key is the wealth of data that service providers can leverage. With proximity services and location awareness to capture and analyse key information (like geo-location and trajectory) from the user equipment, and thanks to very low-latency access to the radio channels and detailed network context information such as radio conditions and neighbour statistics, telcos will be in a position to offer data-rich services that are beyond the scope of OTT offerings. Furthermore, the telcos’ solutions will be more efficient and cost effective.

Q: Where do you deploy MEC servers?

The first release of the MEC working group focuses on a scenario where the MEC server is deployed either at the LTE macro base station (eNB) site, at the 3G radio network controller (RNC) site, or at a multi-technology (3G/LTE) cell aggregation site, although wireless LAN (WLAN) deployments will soon be included.
This should enable a common architecture for the first trials of IoT, LTE-U (LTE in unlicensed spectrum), 5G, cloud RAN (C-RAN), virtual content delivery network (vCDN), mobile video delivery, and other distributed network function virtualisation (NFV) use cases.

The MEC server platform offers a virtualisation manager (an infrastructure-as-a-service, or IaaS, abstraction) and advanced services such as a traffic offload function (TOF), Radio Network Information Services (RNIS), communication services and a service registry. This layer abstracts the details of the radio network elements, so MEC applications are portable and compatible across the network thanks to the standard, open APIs that will be defined. In other words, MEC will define a platform-as-a-service (PaaS) equivalent, hosted in an NFVI IaaS such as OpenStack, with OpenShift applications on top. These components and functional elements are key enablers for MEC solutions in a multi-vendor environment. As enablers, they will stimulate innovation and facilitate global mar
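The service registry described above is the mechanism by which an edge application finds platform services such as RNIS. As a rough illustration of the idea (the class and method names below are hypothetical sketches, not the ETSI MEC APIs), a registry maps advertised capabilities to service endpoints:

```python
# Hypothetical sketch of a MEC application platform service registry.
# Names, endpoints and capability strings are illustrative assumptions,
# not the standardized ETSI MEC interfaces.

class ServiceRegistry:
    """Minimal in-memory registry, as a MEC platform might expose via REST."""

    def __init__(self):
        self._services = {}

    def register(self, name, endpoint, capabilities):
        """An edge service announces itself and what it offers."""
        self._services[name] = {"endpoint": endpoint, "capabilities": capabilities}

    def discover(self, capability):
        """Return endpoints of all services advertising the given capability."""
        return [
            entry["endpoint"]
            for entry in self._services.values()
            if capability in entry["capabilities"]
        ]

registry = ServiceRegistry()
registry.register("rnis", "http://mec-host:8080/rni/v1",
                  ["radio_conditions", "neighbour_stats"])
registry.register("location", "http://mec-host:8080/loc/v1",
                  ["geo_location"])

# A MEC application asks the platform where to obtain radio context.
print(registry.discover("radio_conditions"))
```

In the standardized version of this pattern, the registry is what makes applications portable across vendors: the application depends only on the advertised capability, not on which box implements it.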
Mobile Edge Computing (MEC) enables the edge of the network to run in an environment isolated from the rest of the network and creates access to local resources and data. It is part of edge computing and is highly applicable to many scenarios including M2M, network security, big data analytics, and many business-specific applications. MEC brings virtualized applications much closer to mobile users, ensuring network flexibility, economy and scalability for improved user experience.

This research evaluates MEC technology, architecture and building blocks, ecosystem, market drivers, applications, solutions, and deployment challenges. The report also analyzes MEC industry initiatives, leading companies, and solutions. It includes a market assessment and forecast for MEC users and MEC revenue globally, regionally, and within the enterprise market for the years 2016 to 2021. Forecasts include MEC infrastructure (equipment, platforms, software, APIs, and services).

All purchases of Mind Commerce reports include time with an expert analyst who will help you link key findings in the report to the business issues you’re addressing. This time must be used within three months of purchasing the report.
Here’s a prediction you don’t hear very often: the cloud computing market, as we know it, will be obsolete in a matter of years.

The provocateur is Peter Levine, a general partner at venture capital firm Andreessen Horowitz. He believes that the increased computing power of intelligent Internet of Things devices, combined with increasingly accurate machine learning technologies, will largely replace the infrastructure-as-a-service public cloud market.

The cloud as it’s known today is a “very centralized model” of computing, Levine says. Information is sent to the cloud, where it is processed and stored. Many applications live in the cloud, and whole data centers are being migrated to it.

Levine says we’re already starting to see signs of Internet of Things devices replacing some of the computing power of the cloud. There are smart cars, drones, robots, appliances and machines, each of which collects data in real time. “(Given) the latency of the network and the amount of information, in many of these systems there isn’t time for that information to go back to the central cloud to get processed,” he says. So, he argues, the edge of the network will be forced to become more sophisticated. He adds: “This shift is going to obviate cloud computing as we know it.”

Take a self-driving car as an example. It needs to be able to identify a stop sign or a pedestrian and act on that information instantaneously. It can’t wait for a network connection to the cloud to tell it what to do.

This new world will not eliminate the need for a centralized cloud, Levine says. The cloud will still be where information is offloaded, where it is stored for long periods of time, and where machine-learning algorithms get access to the vast troves of data they need to become ever smarter.

The idea of edge computing becoming more powerful is not original.
Cisco is credited with coining the term fog computing, the idea of analyzing and acting on time-sensitive data at the network edge.

All this is a ‘back-to-the-future’ moment, Levine notes. Computing began with a centralized model focused on the mainframe. Then came the distributed client-server world. The cloud has swung computing back to a centralized platform. The dawn of edge intelligence will once again swing the world back to a distributed system.
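Levine’s self-driving-car argument is, at bottom, a latency budget: how far does the car travel while waiting for a round trip plus processing? A back-of-the-envelope sketch makes the point (all numbers below are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope latency budget for the self-driving-car example.
# Speed, round-trip times and processing time are illustrative assumptions.

speed_m_per_ms = 30 / 1000  # car at 30 m/s (~108 km/h) covers 0.03 m per ms

def reaction_distance(rtt_ms, processing_ms):
    """Distance travelled while a detection round trip completes."""
    return (rtt_ms + processing_ms) * speed_m_per_ms

# Centralized cloud: assume ~100 ms network round trip plus 50 ms processing.
cloud_m = reaction_distance(rtt_ms=100, processing_ms=50)

# On-device / edge: assume ~1 ms to local hardware plus the same processing.
edge_m = reaction_distance(rtt_ms=1, processing_ms=50)

print(f"cloud path: {cloud_m:.2f} m, edge path: {edge_m:.2f} m")
```

Under these assumed numbers the cloud round trip costs roughly three extra meters of travel per decision, which is exactly the kind of margin a pedestrian-detection system cannot give away.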
As more and more enterprise buyers invest in scalable Internet-based applications, the need for NoSQL databases, which cater to complex data models and web-based architectures, becomes increasingly important. It’s a market we have been following closely at diginomica, with the leading vendors fighting it out for market share.

Basho is one of these vendors, with its Riak database attracting the likes of Uber, the NHS and Bet365 as customers.

We got the chance to speak to Basho CEO Adam Wray just before the Christmas break about his priorities for 2017, where he is hoping to make Riak the database of choice for buyers that are investing in IoT deployments and are looking to push more of their data management out to the edge.

Coupled with this, investments in functionality that will attract developers to the platform are also central to the strategy. Basho has traditionally focused on large enterprise customers that needed stability from their distributed systems, but hasn’t got as much of a developer following as, say, MongoDB. In our recent interview with MongoDB CEO Dev Ittycheria, for example, he outlined how Mongo has focused on making the lives of developers easier and thus claimed to have ‘won developers’ hearts away from Oracle’.

Basho’s recent open source release of its Time Series database should go some way to helping it compete with MongoDB on this front, as well as attract buyers interested in IoT, according to Wray. He said:

Our big bet, which we made available as open source in the summer, was our time series offering. If you think of Riak, we’ve traditionally played in key value. So we are in large-scale clients that value stability and resiliency in a distributed environment above all things. NHS medical records, Uber with their dispatch process, etc.

That’s all built off Riak Key Value, based on Riak Core, which is really a distributed routing engine that we can do about anything with.
So we built a time series database, we beta tested it, and we let it loose in open source in the summer. Our approach in 2016 was to focus on core scale and performance before features. So we knew that this would have a detrimental effect on incremental initial use. But keep in mind the roots of our company are that we appeal to the high-scale enterprise first, versus front-line developers. So with that in mind, by bringing a purpose-built time series database, we are in a position to deliver performance and scale better than anything in the marketplace.

Wray provided an example of a company that Basho has recently worked with, which he didn’t want me to name, but is a large enterprise, where he said that the Riak Time Series offering was able to deliver 10x performance on a sixth of the footprint compared to that of competitor database Cassandra. Wray said:

A more efficient operational footprint from a CPU/RAM perspective, coupled with performance, was the first real big thing we wanted to nail in 2016. And we did it in spades. Now you are going to see a real push walking into 2017 for feature functionality, so that we can appeal to frontline developers. What we are really after is owning the IoT-centric portion of the time series marketplace, versus metrics and other areas. If we can nail IoT, it opens up a lot of partnering opportunities, particularly in one major trend which we are keeping an eye on and are moving towards – edge computing.

IoT and the edge

We are beginning to see a lot of vendors, and some buyers, talking about the growing importance of edge computing in a world where all devices are connected – as there is often a need to place processing and analytics close to the device, e.g. machinery, smart cars etc.
This model differs from the purely cloud computing world where everything is centralised, but in reality it is likely to be bi-modal, a mixture of the two approaches. This is where Basho believes it has a sweet spot, as it can cater to both time series data, commonly produced in IoT deployments, and more static key value stores. Wray said:

We see the opportunity in edge computing in time series as the next big wave. Our clients are not only looking at Riak Time Series, but at Riak Key Value to help address that. So with time series you keep all the IoT data, which is linear in nature and sequential; then on the flip side you need to keep your state of device, profile status etc., which is much more appropriate in a key value engine.

We want to break out with a point of view that is not just that we are good for the top 100 to 200 most large-scale enterprises, but more importantly, that if you’re thinking IoT, Basho is a central part of that. And if you’re thinking about the challenges of distributed data at scale and edge computing, then we are definitely going to be within the centre of gravity of that discussion.

We are in another cyclical cycle, where there is way too much compute that needs to happen in the field, if not right on the device itself. So how do we get the best of cl
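Wray’s split between the two engines can be sketched in miniature: append-only, sequential readings belong in a time-series layout keyed by device and timestamp, while the current device state is a single key-value lookup. The sketch below is an illustration of that data-modeling split using plain Python structures, not Riak’s actual API:

```python
# Illustrative sketch of the time-series vs key-value split described above.
# Plain Python structures stand in for the two database engines; this is
# a data-modeling illustration, not Riak's API.

from collections import defaultdict

time_series = defaultdict(list)  # device -> ordered [(timestamp, reading), ...]
key_value = {}                   # device -> latest state / profile

def record_reading(device, timestamp, reading):
    # Time-series side: linear, sequential, append-only IoT data.
    time_series[device].append((timestamp, reading))
    # Key-value side: overwrite the current state of the device.
    key_value[device] = {"last_seen": timestamp, "last_reading": reading}

def range_query(device, start, end):
    """Typical time-series access pattern: readings within a window."""
    return [(t, r) for t, r in time_series[device] if start <= t <= end]

record_reading("sensor-1", 100, 21.5)
record_reading("sensor-1", 160, 22.0)
record_reading("sensor-1", 220, 22.4)

print(range_query("sensor-1", 100, 200))  # history within a window
print(key_value["sensor-1"])              # single-key state lookup
```

The point of the split is access pattern: the time-series side is optimized for range scans over an ever-growing sequence, while the key-value side answers “what is this device’s state right now” in one read.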
Edge computing today may be most simply defined as putting an extra layer of computing in the network, between your smartphone and a centralized data center in the cloud, for some optimizing purpose. From this original, slightly narrow vision, there is still some way to go before realizing full stack relocations (e.g., relocation of an entire web server) that will allow true service enablement just one hop away from the end user device. Obvious benefits of realizing this vision include reduced latency and reduced backhaul capacity, but I believe there will be more profound benefits to operators and service providers in terms of new business model enablement, too.

Software-ization will be the key enabler

Ubiquitous software-ization of the overall networking stack is the key to real edge computing. Software-ization will enable the envisioned “full stack relocation” through virtualization technologies such as NFV and platforms such as OpenStack. Moreover, the inevitable software-ization of the underlying transport networks will bring about a dramatic change in the networking fabric that can be leveraged for flexible and early termination of services near the end user. Specifically, the control and user plane separation enabled by software-defined networking (SDN) provides the basis for novel services implemented as simple software extensions on top of commodity SDN hardware.
This world will decouple network service providers from vendors of equipment, very much like application providers today are decoupled from those selling a smartphone or laptop.

Opportunity emerging for operators and service providers

The software-ization of the networking fabric, in combination with NFV-managed computing infrastructure, also creates new opportunities for operators by virtualizing key parts of the full “network stack.” While today, managed computing racks for major service providers such as Google, Akamai or the BBC need physical installation in key locations of the operator network as so-called points of presence, the increasing flexibility of the networking fabric presents an opportunity to move towards a model where operators can provide computing-as-a-service capabilities for any service provider, not just the major ones (Netflix, HBO, etc.).

Key to this is for operators to utilize their footprint in managed site infrastructures by installing and renting out computing capabilities based on common (NFV) platforms that run on COTS hardware near end users, for example in eNodeBs, broadband gateways or customer premises equipment. With this capability, operators can expose an edge surrogacy service to anyone who provides HTTP-level services today, while meeting 5G KPIs such as low latency or increased throughput. With that, any available and future internet service will effectively have been moved to as little as just one hop away.

The challenges in realizing full stack relocation

Relocating a full internet stack is challenging. First, IP connections need to be terminated flexibly, in places where current network architectures do not foresee such termination, such as within eNodeBs. This requires new approaches to routing, as well as to mobility handling, since current anchor-based indirection approaches will not suffice in a world of highly mobile services and devices. But relocating an entire stack needs to move beyond just IP.
Particularly in the mobile network world, this also requires a rethinking of the current bearer-based connectivity model, which effectively establishes tunnels from one mobile network component to others. This highly inflexible approach causes overhead and delay. Beyond such core connectivity improvements, current HTTP-based web services will need to redirect requests to the nearest service endpoint, possibly just one hop away or located in a nearby mini data center. The DNS indirections in use today meet neither the flexibility nor the timing requirements of real edge computing, where decisions on selecting service endpoints might be based on near real-time criteria such as network or server load.

Solutions to these challenges, currently being incubated in research and standardization communities, must eventually integrate with the SDN-enabled transport networks that will ultimately proliferate in 5G mobile and fixed networks, while not overburdening their control and management. This in itself calls for more innovation in the self-management of networks, particularly at the edge.

A roadmap to the real edge: 3 steps

I see three steps as key in the roadmap to realizing the real edge. First, the discussions in the relevant forums need to evolve towards the full stack relocation vision that I outlined. We need to move beyond the small-step evolution embedded in today’s network infrastructure vision, where the access network merely assists an end-to-end service from the user’s device to a remote data center. Instead, the access network has to become a full member
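The endpoint-selection problem raised above, where a static DNS answer is replaced by a decision over near real-time metrics, can be sketched as a small scoring function. The candidate endpoints, metric names and weights below are illustrative assumptions, not a standardized mechanism:

```python
# Sketch of near real-time endpoint selection, as an alternative to static
# DNS indirection. Endpoints, metrics and weights are illustrative assumptions.

def choose_endpoint(endpoints, latency_weight=1.0, load_weight=0.5):
    """Pick the endpoint with the lowest combined latency/load score."""
    return min(
        endpoints,
        key=lambda e: latency_weight * e["rtt_ms"] + load_weight * e["load_pct"],
    )

candidates = [
    {"name": "enodeb-edge",   "rtt_ms": 2,  "load_pct": 80},  # one hop away, but busy
    {"name": "metro-dc",      "rtt_ms": 12, "load_pct": 30},  # nearby mini data center
    {"name": "central-cloud", "rtt_ms": 80, "load_pct": 10},  # remote data center
]

best = choose_endpoint(candidates)
print(best["name"])
```

Note that with these example metrics the one-hop eNodeB site loses to the nearby mini data center because it is heavily loaded, which is precisely the kind of trade-off a DNS record resolved minutes earlier cannot express.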