Emerging serverless computing technologies, such as function-as-a-service (FaaS) offerings, enable developers to virtualize the internal logic of an application, simplifying management of cloud native applications and allowing cost savings through billing and scaling at the level of individual function calls. Serverless computing is therefore rapidly shifting the attention of software vendors to the problem of developing cloud applications that can use these platforms.
Rational decomposition and orchestration for serverless computing (RADON) aims to create a DevOps framework for building and managing microservices-based applications that can optimally exploit serverless computing technologies. RADON applications will include fine-grained, independently deployable microservices that can efficiently exploit FaaS and container technologies. The end goal is to broaden the adoption of serverless computing within the European software industry. The methodology will strive to tackle complexity, harmonize the abstraction and actuation of action-trigger rules, avoid FaaS lock-in, and optimize decomposition and reuse through model-based, FaaS-enabled development and orchestration.
The project’s kickoff meeting was held on 17-18 January in London, at Imperial College London, UK, the coordinator of the project. The consortium comprises 8 partners from several EU member states.
Participant organization name: Imperial College of Science, Technology and Medicine. Country: United Kingdom.
The Internet of Things (IoT) represents a cyber-physical world in which physical things are interconnected on the Web. This research proposes the Energy-efficient Inter-organizational wireless sensor data collection Framework (EnIF). Environmental monitoring and urban sensing are two major application scenarios in IoT. Unlike traditional sensor environments, environmental sensing in IoT may require battery-powered nodes to perform the sensing tasks. This requirement raises a critical challenge: ensuring that sensor data can be collected in a timely and energy-efficient manner. Although numerous energy-efficient approaches for IoT scenarios have been proposed, previous works assumed that the entire network was managed by a single organization in which network establishment and communication were pre-configured. This assumption is inconsistent with the fact that IoT is established as a federated network of heterogeneous devices controlled by different organizations. The aim of the framework is to enable a dynamic inter-organizational collaborative topology that saves energy on data transmissions, using a service-oriented architecture.
Seng W. Loke, Pervasive Computing Laboratory, Department of Computer Science and Information Technology, La Trobe University, Australia.
Hai Dong,
Sensors, Clouds, and Services Laboratory (SCSLab), Department of
Computer Science and Information Technology, RMIT University, Australia.
Flora Salim, Department of Computer Science and Information Technology, RMIT University, Australia.
Sea Ling, Faculty of Information Technology, Monash University, Australia.
Publication
Chii Chang, Seng W. Loke, Hai Dong, Flora Salim, Satish N. Srirama, Mohan Liyanage and Sea Ling. An Energy-Efficient Inter-organizational Wireless Sensor Data Collection Framework. In Proceedings of the 22nd IEEE International Conference on Web Services (ICWS 2015), pp. 639-646. June 27 - July 2, 2015, New York, USA.
The goal of this research is to overcome the challenges of cyber-physical systems in the Internet of Things, including interoperability, autonomous machine-to-machine communication, automatic configuration, energy efficiency, and trustworthiness.
Today, a common vision of the Internet of Things (IoT) is a global network that interconnects various electronic devices in a meaningful manner. These devices include Radio Frequency Identification (RFID)-attached objects, EPCGlobal Network-connected objects and a variety of internet-enabled objects such as mobile phones, smart watches, vehicles, sensors, home appliances and so on. As described by the European Commission Information Society and Media (2009), the fundamental idea of IoT is to allow “people and things to be connected Anytime, Anyplace, with Anything and Anyone, ideally using Any path/network and Any service”. The US National Intelligence Council has predicted that “by 2025 Internet nodes may reside in everyday things: food packages, furniture, paper documents, and more”.
IoT technologies can be applied in various scenarios. For example, by combining augmented reality with IoT, a system can help people with disabilities perform many daily activities such as travelling, shopping and operating appliances. In the enterprise domain, the EPCGlobal Network provides a convenient way to support product transport and tracking processes. Other scenarios include transportation, healthcare, smart environments, social networking and so on.
Although various IoT projects are progressing, many challenges remain unsolved.
Interoperability. Most existing IoT solutions were built in isolation. The IoT environment still lacks a feasible way to enable interaction among these heterogeneous fragments in a highly distributed environment, and assumptions such as relying on centralised mediation services are inapplicable and unrealistic.
Autonomous Machine-to-Machine (M2M) communication. Autonomous M2M communication enables cyber-physical systems to provide self-managed services that ease people’s daily lives. However, such systems face service discovery challenges because existing Web-based service discovery mechanisms were not designed for resource-constrained devices, which are the major entities in IoT environments.
Automatic Configuration. Most existing IoT solutions were proposed for domain-specific applications and cannot be directly applied to different scenarios, due to the lack of an automatic configuration mechanism. Furthermore, autonomous systems also require self-management, self-healing and self-optimisation mechanisms.
Energy Efficient Service Provisioning. Resource-constrained devices such as sensors, actuators and mobile devices, which rely on battery power, are the major objects of IoT. In order to interact with these devices in an autonomous M2M manner, they need to provide some form of networked service. However, existing SOA-based service provisioning approaches such as HTTP-based REST or SOAP were fundamentally not designed for resource-constrained IoT devices and are considered heavyweight in terms of energy consumption (see the lightweight-protocol sketch after this list).
Trustworthiness. Classic security strategies for Web services face challenges in IoT because of several limitations: (1) it is hard to control the identification of IoT participants; (2) centralised control is unrealistic; (3) data transmission and overhead are heavy. One trend in improving IoT security is to apply trust-based strategies. Trust strategies can be categorised into two types: (1) the service requester aspect; (2) the service provider aspect. Existing trust strategies for IoT were proposed only for the service requester aspect; there is still no proper trust solution for protecting publicly accessible ‘Things’ from malicious service requesters.
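To make the energy-efficiency point above concrete, below is a minimal sketch of a lightweight CoAP request using the aiocoap Python library; the library choice and the sensor endpoint are illustrative assumptions, not part of the projects described here. CoAP runs over UDP with compact binary headers, which is why it is commonly preferred over HTTP-based REST or SOAP on battery-powered devices.

```python
# Minimal CoAP client sketch (hypothetical endpoint; no specific
# library is prescribed by the text above). CoAP's compact UDP-based
# messaging suits battery-powered IoT nodes better than HTTP or SOAP.
import asyncio
from aiocoap import Context, Message, GET

async def read_sensor():
    protocol = await Context.create_client_context()
    # 'coap://sensor-node.local/temperature' is a placeholder resource.
    request = Message(code=GET, uri='coap://sensor-node.local/temperature')
    response = await protocol.request(request).response
    print('Result: %s, payload: %r' % (response.code, response.payload))

asyncio.run(read_sensor())
```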
Current Projects
Middleware
Energy-efficient Things Framework
Mobile-hosted Things Middleware (MHTM)
Mobile-hosted Cloud Middleware (MHCM)
Mobile Resource Composition Mediation Framework (MRCMF)
Trust
Trustworthy Internet of Things
Application
Realtime Augmented Reality using Context-aware Cloud services with Mobile Hosts
Investigate, develop and validate an adaptive middleware-based management system that can provide a reliable, scalable and cost-efficient linkage between IoT and Big Data clouds.
Theme 2: Adaptive Middleware for Reliable Cyber-Physical Systems
Supporting reliability and scalability in Cyber-Physical Systems (CPS) for the Internet of Things (IoT) requires a large amount of additional resource usage: encoding/decoding of additional data and signals, and additional wireless network communication to monitor and manage the processes. This results in high costs in terms of both the lifetime of the front-line battery-powered machines (e.g. sensors, robots) and the bandwidth of the wireless network that connects the machines to the backend controller systems. Although a fair number of software middleware frameworks for IoT exist, none fully addresses the challenges of reliability, scalability and cost-efficiency together.
The goal of this project is to develop a reliable, scalable, cost-efficient middleware that will hasten the development of IoT applications for both academia and industry. Domain-specific scientists can also benefit from the middleware platform by performing real-world testing in IoT environments without spending significant time setting up the experimental environment.
The proposed middleware consists of context-aware ubiquitous computing schemes and platform-independent process management schemes. The system can identify the reliability and efficiency of each IoT task execution, enabling self-adaptation and self-optimisation. The software components of the middleware will be designed on a plug-and-play basis, so that the system can dynamically modify and adjust its processes at runtime (a sketch of this idea follows below). A prototype will be implemented and tested on a real-world IoT testbed consisting of wirelessly connected portable machines and cloud services, in which the machines collaborate autonomously (e.g. for data acquisition) under the governance of the cloud systems.
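As a rough illustration of the plug-and-play concept, the following sketch shows a component registry whose handlers can be swapped at runtime; the class and method names are invented for this example and do not reflect the middleware's actual interfaces.

```python
# Hypothetical sketch of the plug-and-play idea: components register
# behind a stable interface and can be replaced while the system runs.
from typing import Protocol

class TaskHandler(Protocol):
    def execute(self, payload: dict) -> dict: ...

class ComponentRegistry:
    def __init__(self):
        self._handlers: dict[str, TaskHandler] = {}

    def register(self, name: str, handler: TaskHandler) -> None:
        # Re-registering a name swaps the behaviour at runtime without
        # restarting the processes that look handlers up by name.
        self._handlers[name] = handler

    def dispatch(self, name: str, payload: dict) -> dict:
        return self._handlers[name].execute(payload)

class LocalSensorRead:
    def execute(self, payload: dict) -> dict:
        return {"source": "local", **payload}

registry = ComponentRegistry()
registry.register("sensor-read", LocalSensorRead())
print(registry.dispatch("sensor-read", {"node": "n1"}))
```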
Distributed Business Process Management System for IoT
Big Data in IoT refers to the vast volume of data generated by IoT networks, data that organisations could not draw on before. Ideally, organisations will use this Big Data to make their business processes more efficient and more intelligent. In order to realise this vision, a Business Process Management System for IoT (BPMS4IoT) needs to address the challenge of varying forms of data. IoT data comes in various formats from different co-existing objects in IoT networks. An information system can apply machine-learning mechanisms to identify correlations between the data from different objects and generate meaningful information. However, processing the various formats of data may not be a swift task. For example, in order to identify a suspicious activity in an outdoor environment, the system may integrate video data and temperature data from different sensors and then either utilise an external third-party cloud service for the analysis or invoke an external database service to retrieve the related data and analyse them in the intra-organisational information system. The challenge is: if such a need arises on demand, how does the system generate the result in time, given the data transmission time across different networks and the large volume of data to process? One promising solution is to apply a distributed BPMS model in which processes can be handled at the self-organised edge network of IoT (a simple placement sketch follows below).
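To illustrate the edge-versus-cloud trade-off described above, here is a toy placement rule that compares estimated completion times; all figures and function names are invented for illustration and are not part of BPMS4IoT itself.

```python
# Illustrative sketch: choose where to run an analysis task by comparing
# estimated completion times. All figures are placeholder assumptions.

def transfer_time(data_bytes: float, bandwidth_bps: float, latency_s: float) -> float:
    return latency_s + data_bytes * 8 / bandwidth_bps

def place_task(data_bytes, edge_speed, cloud_speed, bandwidth_bps, latency_s, work_units):
    edge_time = work_units / edge_speed          # process in place
    cloud_time = transfer_time(data_bytes, bandwidth_bps, latency_s) \
                 + work_units / cloud_speed      # ship the data, then process
    return "edge" if edge_time <= cloud_time else "cloud"

# 500 MB of video data over a slow uplink: keeping the work at the edge wins.
print(place_task(data_bytes=500e6, edge_speed=1.0, cloud_speed=20.0,
                 bandwidth_bps=10e6, latency_s=0.05, work_units=60.0))
```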
Publications
Chii Chang, Satish Narayana Srirama, and Rajkumar Buyya. Mobile Cloud Business Process Management System for the Internet of Things: A Survey. ACM Computing Surveys, Volume 49, Issue 4, December 2016, Article No. 70. ACM Press, New York, USA. ISSN 0360-0300, DOI: 10.1145/3012000.
Jakob Mass, Chii Chang and Satish N. Srirama. WiseWare: A
Device-to-Device-based Business Process Management System for Industrial
Internet of Things. In Proceedings of the 9th IEEE International
Conference on Internet of Things (iThings 2016), Dec. 16-19, Chengdu,
China, 2016, pp. 269-275.
Fog and Edge Computing
Big data services rely on Internet of Things (IoT) systems to integrate heterogeneous devices and deliver data to a central server for analytics. However, the increasing number of connected devices and various networking issues (e.g. network latency, unstable bandwidth, unstable connectivity) degrade the overall performance of the system, especially when the big data service needs to provide timely responses. Therefore, the system needs a fog/edge computing (F/EC) architecture, which distributes certain computational tasks to devices at the edge network of IoT, where the data-source devices are located, in order to enhance the overall performance.
Research questions:
How to effectively deploy and manage F/EC architecture in IoT for big data services?
How does the F/EC architecture handle mobile objects/devices (e.g. vehicles, humans, animals) in IoT systems?
In order to answer the first question, we have studied the most recent works in the F/EC domain. The outcome of this study includes one IEEE Computer magazine article (published) [2] and two book chapters (submitted) [5, 6].
To answer the second question, we have studied the mobile fog computing domain and developed an adaptive framework, together with a corresponding algorithm, to handle mobile objects/devices in F/EC. The results have been published (or are to be published) in a peer-reviewed international conference [1], an international journal [3] and a book [4] (see the publication list).
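As a toy illustration of the work-stealing idea explored in [3], the following sketch lets an idle edge node take a queued task from a busier neighbour; it is a schematic, not the algorithm published in the paper.

```python
# Toy illustration of work stealing between edge nodes: an idle node
# steals queued tasks from a busier neighbour. A schematic sketch only.
from collections import deque

class EdgeNode:
    def __init__(self, name):
        self.name = name
        self.queue = deque()

    def steal_from(self, victim: "EdgeNode") -> None:
        # Steal from the tail of the victim's queue (the usual
        # work-stealing convention); the owner pops from the head.
        if len(victim.queue) > 1:
            self.queue.append(victim.queue.pop())

busy, idle = EdgeNode("gateway"), EdgeNode("phone")
busy.queue.extend("task-%d" % i for i in range(4))
idle.steal_from(busy)
print(idle.queue)   # deque(['task-3'])
```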
Publications
Chii Chang, Mohan Liyanage, Sander Soo, Satish Narayana Srirama. Fog Computing as a Resource-Aware Enhancement for Vicinal Mobile Mesh Social Networking. In Proceedings of the 31st IEEE International Conference on Advanced Information Networking and Applications (AINA-2017). 27-29 March 2017, Tamkang University, Taipei, Taiwan.
Chii Chang, Satish Narayana Srirama, and Rajkumar Buyya, Indie Fog: An Efficient Fog-Computing Infrastructure for the Internet of Things, IEEE Computer, Volume 50, Issue 9, Pages: 92-98, ISSN: 0018-9162, IEEE COMPUTER SOCIETY, USA, September 2017.
Sander Soo, Chii Chang, Seng W. Loke, Satish Srirama (2017). Proactive Mobile Fog Computing using Work Stealing: Data Processing at the Edge. International Journal of Mobile Computing and Multimedia Communications (IJMCMC), Vol. 8, No. 4.
Sander Soo, Chii Chang, Seng W. Loke, Satish Srirama. Dynamic Fog Computing: Practical Processing at Mobile Edge Devices. In Algorithms, Methods, and Applications in Mobile Computing and Communications, edited by Agustinus Borgy Waluyo, IGI Global, 2018. (Accepted to be published).
Chii Chang, Amnir Hadachi, Satish Narayana Srirama and Mart Min. Mobile Big Data: Foundations, State-of-the-Art, and Future Directions. In Encyclopedia of Big Data Technologies, edited by Sherif Sakr and Albert Y. Zomaya. (Submitted).
Chii Chang, Satish Narayana Srirama and Rajkumar Buyya. Introduction to Fog and Edge Computing. In Fog and Edge Computing: Principles and Paradigms, edited by Rajkumar Buyya and Satish Narayana Srirama.
Recent improvements in mobile technologies are fostering the emergence of ubiquitous environments in which the user can access, create and share information at any location with considerable ease. Moreover, mobile devices have become an essential part of distributed architectures that can monitor the context of the user (e.g. location, situation) and thus react proactively. Smartphones are also equipped with a variety of sensors (GPS, magnetic field, etc.) that enrich mobile applications with location awareness and sensing capabilities. These advances make it possible to meet contextual requirements and improve the quality of service (QoS) of applications by adapting the interaction between the handset and the user in real time. For example, the accelerometer can sense how the user is holding the handset, so that the screen orientation changes accordingly.
The accelerometer is a sensing element that measures the acceleration of the body in which it is embedded (e.g. a mobile device). Depending on the number of axes, it can gather acceleration information across multiple dimensions. A triaxial accelerometer, the most common type in handsets from vendors such as HTC, Samsung and Nokia, senses acceleration on three axes (x, y and z), corresponding to forward/backward, left/right and up/down movements, respectively. For example, in the case of a runner, up/down captures crouching during the warm-up before a run, forward/backward relates to speeding up and slowing down, and left/right captures turns while running.
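The following sketch makes this axis mapping concrete by classifying a (gravity-compensated) sample by its dominant axis; the threshold and axis labels are illustrative assumptions, since conventions vary between devices.

```python
# Illustrative mapping of triaxial accelerometer readings to the
# movements described above. Axis conventions differ across devices,
# so the labels here are assumptions made for this sketch.
import math

def dominant_motion(x: float, y: float, z: float) -> str:
    magnitude = math.sqrt(x * x + y * y + z * z)
    if magnitude < 0.5:                       # near rest (gravity removed)
        return "still"
    axis = max(("forward/backward", abs(x)),
               ("left/right", abs(y)),
               ("up/down", abs(z)), key=lambda p: p[1])
    return axis[0]

print(dominant_motion(0.1, 0.2, 2.4))   # up/down, e.g. crouching
print(dominant_motion(1.8, 0.3, 0.2))   # forward/backward, e.g. speeding up
```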
Including notification services within mobile applications normally involves establishing and maintaining a specialized infrastructure (e.g. application server, mobile clients) that depends on mobile platform specifics (e.g. device registration) and the related cloud vendor technology (e.g. authentication mechanisms, communication protocols). Alternatively, infrastructures already running in a datacenter and based on instant messaging technologies such as XMPP can be adapted to deliver notifications to smartphones at a comparable rate of service, without suffering from message limitations, mobile platform restrictions or cloud provider constraints. In other words, extending technologies such as XMPP facilitates the integration of cloud functionality within mobile applications and alleviates development by avoiding cloud vendor lock-in.
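As a minimal sketch of this approach, the following uses the slixmpp Python library (one possible choice; the text does not prescribe an implementation) to push a notification from a datacenter-side sender to a smartphone client over an existing XMPP infrastructure.

```python
# Minimal XMPP notification sketch using slixmpp. A datacenter-side
# sender pushes a message to a smartphone's XMPP client, avoiding
# vendor-specific push services. Credentials and JIDs are placeholders.
import slixmpp

class NotificationSender(slixmpp.ClientXMPP):
    def __init__(self, jid, password, recipient, body):
        super().__init__(jid, password)
        self.recipient, self.body = recipient, body
        self.add_event_handler("session_start", self.start)

    async def start(self, event):
        self.send_presence()
        await self.get_roster()
        self.send_message(mto=self.recipient, mbody=self.body, mtype="chat")
        self.disconnect()

xmpp = NotificationSender("notifier@example.org", "secret",
                          "phone@example.org", "Your build finished.")
xmpp.connect()
xmpp.process(forever=False)
```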
Dynamic Algorithm Modeling Application
(DAMA) is a generic distributed computing application, which can be
reconfigured to model the structure of different algorithms. It follows
the single program, multiple data (SPMD) parallelism model and has been
implemented using four different parallel programming solutions: Apache
Spark, Hadoop MapReduce, Apache Hama and MPIJava. After configuring DAMA
to model the structure of a given algorithm, it can be used directly as
an approximate benchmark for this algorithm on any of the supported
frameworks or libraries. This process does not require any programming
or code debugging steps and can thus significantly simplify the process
of estimating the performance of the algorithm on different frameworks
as we can avoid implementing the given algorithm on each of the
available frameworks and using the implementations as benchmarks.
To be clear, all these steps are still
going to be required once the target distributed computing framework is
chosen. The goal of DAMA is simply to postpone these steps until it is
known which specific framework should give the best result and thus
greatly decrease the scope of work that has to be done. This kind of
approach is critical as the number of available distributed computing
frameworks continues to increase in the Hadoop ecosystem and also
outside of it.
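The structure-modeling idea can be pictured as follows: rather than implementing the real algorithm, one describes its shape (number of SPMD supersteps, per-step compute, message volume) and times a synthetic skeleton that replays that shape. The configuration fields below are illustrative guesses, not DAMA's actual interface.

```python
# Schematic of the DAMA idea (not its real interface): a benchmark is a
# structural description of an algorithm, replayed as synthetic work.
import time
from dataclasses import dataclass

@dataclass
class AlgorithmStructure:
    iterations: int        # number of SPMD supersteps
    compute_units: int     # synthetic work per task per superstep
    message_bytes: int     # data exchanged between tasks per superstep

def run_skeleton(structure: AlgorithmStructure) -> float:
    start = time.perf_counter()
    payload = b"x" * structure.message_bytes
    for _ in range(structure.iterations):
        acc = 0
        for i in range(structure.compute_units):   # stand-in computation
            acc += i * i
        _ = bytes(payload)                          # stand-in communication
    return time.perf_counter() - start

# e.g. a k-means-like structure: many cheap supersteps, moderate messages
print(run_skeleton(AlgorithmStructure(iterations=50,
                                      compute_units=100_000,
                                      message_bytes=64_000)))
```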
The goal of the project is to acquire infrastructure and establish the IoT and Smart Solutions Lab in cooperation with industry. The lab will let students experiment with the respective devices and design innovative projects, driving the Estonian startup culture forward.
A platform for Goods Monitoring in Industrial IoT. This platform enables long-running IoT monitoring tasks by using Business Process Management System software and mobile devices.
Internet of Things
(IoT)-enabled information system solutions are emerging in various
domains such as remote healthcare, smart logistics, agriculture and so
on. Business Process Management Systems (BPMS) have already proved
themselves to be promising tools for driving and managing devices within
IoT systems.
In this work, we propose a system design for decentralised device-to-device (D2D)-based BP execution, where mobile nodes can both execute BPs and migrate BP execution to other nodes at runtime. We apply this design to the field of smart logistics in order to enable smart goods monitoring. The presented goods monitoring solution reacts to events as soon as they occur, while also generating a trace of the monitoring execution history. A prototype focusing on the migration functionality of the platform has been implemented and tested to evaluate its performance in the context of the smart logistics scenario (a schematic of the migration step follows below).
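The migration step can be sketched as checkpointing the process state and letting a peer resume it; the names and fields below are illustrative and do not reflect the actual WiseWare implementation.

```python
# Schematic of D2D business-process migration (illustrative names only):
# a node checkpoints the process instance and a peer resumes it from
# the recorded step, extending the execution trace as it goes.
import json

def checkpoint(instance_id, current_step, variables):
    # Serialise just enough state for a peer to continue execution.
    return json.dumps({"id": instance_id,
                       "step": current_step,
                       "vars": variables,
                       "trace": ["migrated"]})

def resume(snapshot: str) -> dict:
    state = json.loads(snapshot)
    state["trace"].append("resumed at step %d" % state["step"])
    return state

snap = checkpoint("monitor-42", 3, {"temperature_max": 8.0})
print(resume(snap)["trace"])
```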
Cloud computing has gained significant popularity over the past few years. Running applications on the cloud brings benefits such as seamless maintenance and high resource availability. Furthermore, by employing service-oriented architecture and resource virtualization technology, the cloud provides the highest scalability for enterprise applications with variable load. This feature is the main attraction for migrating workflows to the cloud. A workflow comprises several tasks that communicate with each other over the network by exchanging various volumes of data. This communication between tasks sometimes becomes the main bottleneck when migrating workflows to the cloud. To resolve this problem, we partition the workflow and share data files between tasks that communicate with each other most often (a greedy sketch of this partitioning follows below).
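A greedy way to picture this partitioning is to merge tasks whose pairwise communication volume exceeds a threshold, so that heavy communicators end up sharing files locally; the union-find sketch below is a simplification, not the exact method used in this work.

```python
# Greedy sketch of communication-aware workflow partitioning: tasks that
# exchange the most data are merged into the same partition so their
# files can be shared locally. A simplification for illustration only.

def partition(edges, threshold):
    # edges: {(task_a, task_b): bytes_exchanged}
    parent = {}

    def find(t):
        parent.setdefault(t, t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]   # path halving
            t = parent[t]
        return t

    for (a, b), volume in sorted(edges.items(), key=lambda e: -e[1]):
        if volume >= threshold:
            parent[find(a)] = find(b)       # merge heavy communicators
    groups = {}
    for t in {t for pair in edges for t in pair}:
        groups.setdefault(find(t), []).append(t)
    return list(groups.values())

edges = {("extract", "transform"): 5_000, ("transform", "load"): 4_500,
         ("extract", "report"): 10}
# e.g. [['extract', 'transform', 'load'], ['report']] (order may vary)
print(partition(edges, threshold=1_000))
```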
In addition, these tasks perform different operations and require different amounts of resources. Since each task of a workflow requires different processing power, at times of load variation each task must scale in a manner that best fulfills its specific requirements. Scaling can be done manually, provided that the load-change periods are deterministic, or automatically, when there are unpredicted spikes and slopes in the workload. Enterprise application providers always look for methods that minimize resource consumption cost, handle the maximum load, and still guarantee the expected Quality of Service (QoS). A number of autoscaling policies have been proposed so far. Some of these methods try to predict the next incoming load, while others react to the incoming load at its arrival time and change the resource setup based on the real load rate rather than a predicted one. In both cases, however, an optimal resource provisioning policy is needed to determine how many servers must be added to or removed from the system in order to handle the load while minimizing cost. Current methods in this field take into account several related parameters, such as the incoming workload, CPU usage of servers, network bandwidth, response time, and the processing power and cost of the servers. Nevertheless, none of them incorporates the life duration of a running server, a metric that can contribute to finding the most optimal policy. This parameter becomes important when the scaling algorithm tries to optimize cost by employing a spectrum of instance types with different processing powers and costs. We have created a generic LP (linear programming) model that takes into account all the major factors involved in scaling, including the periodic cost, configuration cost and processing power of each instance type, the instance count limits of clouds, and the life duration of each instance with a customizable level of precision, and outputs an optimal combination of instance types best suiting each task of a workflow. Our goal is to build a framework based on our LP model and workflow partitioning for optimal migration of any enterprise application/workflow to the cloud. A toy version of such an LP is sketched below.
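A toy version of such an LP, minimizing hourly instance cost subject to a processing-capacity constraint, can be written with scipy's linprog (a solver choice assumed for this sketch, not necessarily the one used in the paper); all coefficients are invented, and the real model additionally covers configuration cost and instance life duration.

```python
# Toy linear program in the spirit of the model described above:
# choose instance counts x_i minimizing hourly cost while providing
# enough aggregate processing power for the incoming load. All numbers
# here are invented for illustration.
from scipy.optimize import linprog

cost = [0.05, 0.10, 0.40]          # $/hour for small, medium, large
power = [1.0, 2.2, 9.0]            # relative processing power per type
load = 25.0                        # required aggregate processing power

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the capacity
# constraint sum(power_i * x_i) >= load is written with flipped signs.
result = linprog(c=cost,
                 A_ub=[[-p for p in power]],
                 b_ub=[-load],
                 bounds=[(0, 20)] * 3,       # per-type instance count limit
                 integrality=[1, 1, 1])      # integer counts (scipy >= 1.9)
print(result.x, result.fun)                  # instance mix and hourly cost
```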
S. N. Srirama, A. Ostovar: Optimal Resource Provisioning for Scaling
Enterprise Applications on the Cloud, The 6th IEEE International
Conference on Cloud Computing Technology and Science (CloudCom-2014),
December 15-18, 2014, pp. 262-271. IEEE.