
Lecture Review: 5G Cloud, AIoT and Edge Computing

  • 2020.06.30
  • News
On June 24, Professor Kai Hwang gave a lecture on 5G Cloud, AIoT and Edge Computing. Here is a review of the lecture.

On June 24, Professor Kai Hwang of the Research Center for Internet of Things and Smart Cloud gave a lecture on the topic of 5G Cloud, AIoT and Edge Computing. Professor Hwang is the Presidential Chair Professor at The Chinese University of Hong Kong (Shenzhen) and a Life Fellow of IEEE. Prof. Li Shipeng, Executive President of AIRS, hosted the lecture.

Professor Hwang first introduced the latest developments in 5G, the Internet of Things, edge computing and the BeiDou system. 5G, the fifth-generation mobile communication technology, is characterized by low latency and high data rates. Through 5G, the communication network can be connected with the Internet of Things, edge computing and satellite communication, and further integrated with social networking, mobile networks, big data analytics, cloud computing and the Internet of Things, realizing SMACT (Social, Mobile, Analytics, Cloud, IoT) and ultimately a world in which everything is interconnected and intelligent.

Figure 1: SMACT (Social, Mobile, Analytics, Cloud, IoT) (Ref: K. David and H. Berndt, "6G Vision and Requirements", IEEE Vehicular Technology Magazine, Sept. 2018)

Professor Hwang then talked about the theory and development of AI chips, the industrial internet, blockchain and AIoT (the Intelligent Internet of Things). The emergence and development of these technologies have given rise to a large number of AI application scenarios in the IoT field. Applying AI to such varied applications requires chips designed for AI computing, different from traditional CPUs and GPUs. Professor Hwang introduced a new kind of AI processor, the IPU (Intelligence Processing Unit). The chip is designed by Graphcore, and the mass-produced model is the GC2. The processor contains 1216 IPU tiles, each with an independent IPU core for computation and its own In-Processor Memory. The entire GC2 supports 7296 threads in total, allowing 7296 programs to run in parallel. By using a distributed on-chip memory architecture, the IPU overcomes the memory-wall bottleneck of AI chips. IPU applications already cover many areas of machine learning, including natural language processing, image/video processing, time-series analysis, recommendation/ranking, and probabilistic models. This strong performance gives the IPU the opportunity to play an important role in future AIoT applications.

Figure 2: Comparison of IPU, CPU, and GPU (Ref: https://www.graphcore.ai/products/ipu)
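
To make the tile-level parallelism described above more concrete, the following minimal sketch models a GC2-style layout in plain Python, with each simulated tile holding its own local memory and processing its share of the data independently. It is purely illustrative and does not use Graphcore's actual Poplar programming interface.

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch only: models the GC2 layout described in the lecture
# (1216 tiles, each with its own in-processor memory and worker threads).
# This is NOT Graphcore's Poplar API, just an illustration of the idea.
NUM_TILES = 1216
THREADS_PER_TILE = 6          # 1216 tiles x 6 threads = 7296 parallel programs

class Tile:
    def __init__(self, tile_id):
        self.tile_id = tile_id
        self.local_memory = {}    # stands in for the tile's In-Processor Memory

    def run(self, chunk):
        # Each tile computes on data held locally, which is how the IPU
        # avoids the off-chip memory-wall bottleneck.
        self.local_memory["data"] = chunk
        return sum(x * x for x in chunk)

def run_on_ipu(data):
    tiles = [Tile(i) for i in range(NUM_TILES)]
    chunks = [data[i::NUM_TILES] for i in range(NUM_TILES)]
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = pool.map(lambda tc: tc[0].run(tc[1]), zip(tiles, chunks))
    return sum(results)

if __name__ == "__main__":
    print(run_on_ipu(list(range(100_000))))
```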

After analyzing these related technologies, Professor Hwang shared his outlook for the era of the Internet of Everything. 2020 will be a year of convergence for the Internet of Things, cloud computing and blockchain, and this convergence will in turn raise the overall level of technology. In the era of the Internet of Everything, applications such as smart homes, smart cities, smart transportation and smart healthcare, realized through the Internet of Things and 5G, will enhance the intelligence of society as a whole.

Figure 3: The Internet of Everything (Ref: Prof. Jainlong Cao, HKUT, 2018)

Next, Professor Hwang introduced the Research Center for Internet of Things and Smart Cloud. Established in 2019 and affiliated with the Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS), the Center is committed to integrating high-performance clusters, intelligent cloud and Internet of Things technologies to build a large-scale intelligent manufacturing cloud and industrial data center connected with industry across the Greater Bay Area.

The intelligent industrial cloud and data center established by the research center will integrate technologies such as intelligent robotics, the Internet of Things, NB-IoT, 5G and satellite networks to provide AIRS research teams with support in artificial intelligence, cognitive computing, high-performance computing and related areas. It also aims to promote industrial IoT sensing and big-data cognition in the Greater Bay Area, and to help industry create innovative smart products and move toward smart supply chains, thereby substantially improving industrial and economic returns.

The cloud platform will be completed in August 2020. It will include cloud computing, big data, artificial intelligence and 5G edge computing modules, providing 1520 CPU cores, 18688 GB of memory, 680 TB of storage (including dual-copy backup), and 1169.3 TFLOPS plus 2375 TFLOPS of Tensor computing power. It will support tasks such as big data computing, cloud analytics, artificial intelligence computing, edge computing, and 5G and IoT communication experiments.
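
As a rough illustration of how capacity on such a platform might be described and sanity-checked, the sketch below encodes the figures quoted above in a simple resource table. The field names and request format are hypothetical and are not the platform's actual configuration schema.

```python
# Hypothetical capacity table for the AIRS cloud platform, using the figures
# quoted in the lecture; field names and the request format are illustrative.
CAPACITY = {
    "cpu_cores": 1520,
    "memory_gb": 18688,
    "storage_tb": 680,          # includes dual-copy backup
    "general_tflops": 1169.3,
    "tensor_tflops": 2375.0,
}

def can_schedule(requests):
    """Check whether a batch of job requests fits within total capacity."""
    for resource, total in CAPACITY.items():
        needed = sum(job.get(resource, 0) for job in requests)
        if needed > total:
            return False, resource
    return True, None

jobs = [
    {"cpu_cores": 512, "memory_gb": 4096, "tensor_tflops": 800},   # AI training
    {"cpu_cores": 800, "memory_gb": 8192, "storage_tb": 200},      # big data
]
print(can_schedule(jobs))   # -> (True, None)
```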

Figure 4: AIRS cloud system architecture based on Inspur server cluster

Professor Hwang proposed that this cloud platform can conduct integration experiments on technologies such as intelligent cloud, artificial intelligence chips, edge perception devices and the Internet of Things, tackling the problem of integrating high-tech information systems and promoting the integration, application and development of industrial intelligent cloud and industrial IoT technologies in the Greater Bay Area. At present, the platform has completed its cloud computing, big data and artificial intelligence modules and can already perform large-scale data storage, computation and artificial intelligence workloads. Using this platform, the research team has published a number of papers and monographs in cloud computing, IoT, edge computing and artificial intelligence, and proposed a variety of new theories and architectures.

In terms of security, most cloud or edge computing platforms are deployed on Linux, so attacks against Linux pose a serious threat to the entire platform. In response, Dr. Yonggang Li proposed a new Virtual Wall technology to filter out rootkit attacks on Linux systems. A rootkit is a special kind of malware that attackers use to hide their malicious programs. The Virtual Wall proposed by the research center can enforce its own security policy, detect and track rootkit events in host mode in a timely manner, classify and track potential rootkit attacks, and make meaningful filtering decisions. It can protect the Linux servers built into modern cloud systems, data centers and supercomputers, and provide long-term protection for cloud-based 5G IoT and edge computing workloads. This result will be published in IEEE Transactions on Computers. [1]
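
The paper itself is not reproduced here, so the sketch below is only a schematic of the policy-driven filtering idea described above: kernel-level events are classified against a configurable security policy, and suspicious ones are blocked and tracked. The event types and policy rules are hypothetical and do not reflect the actual Virtual Wall system.

```python
# Schematic of a policy-driven event filter in the spirit of the Virtual Wall
# idea described above; event types and policy rules are hypothetical and do
# not reflect the actual system in the cited paper.
from dataclasses import dataclass

@dataclass
class KernelEvent:
    kind: str          # e.g. "module_load", "syscall_table_write", "file_open"
    source: str        # process or module that triggered the event

# Security policy: event kinds treated as potential rootkit activity.
SUSPICIOUS_KINDS = {"syscall_table_write", "hidden_module_load", "kprobe_tamper"}

def virtual_wall_filter(events):
    """Classify events and return (allowed, blocked) lists."""
    allowed, blocked = [], []
    for ev in events:
        if ev.kind in SUSPICIOUS_KINDS:
            blocked.append(ev)        # track and filter potential rootkit actions
        else:
            allowed.append(ev)
    return allowed, blocked

events = [
    KernelEvent("file_open", "nginx"),
    KernelEvent("syscall_table_write", "unknown.ko"),
]
allowed, blocked = virtual_wall_filter(events)
print(len(allowed), len(blocked))   # -> 1 1
```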

To promote the intelligent development of supply chains in the Greater Bay Area, Li Yuejin, a doctoral student at the research center, has proposed an industrial supply-chain management theory based on an intelligent cloud platform, combining financial engineering, cloud computing, big data analytics and blockchain technology. Industrial enterprises can upload their logistics and production information to the cloud platform in real time; the platform analyzes this information intelligently, and the results help companies manage and optimize their supply chains. The platform can also generate smart contracts, which are transmitted in encrypted form via blockchain; financial institutions can use these smart contracts to assess an enterprise's logistics network and overall strength, improving financing efficiency. Combined with this technology, the AIRS Cloud platform can promote the modernization of logistics for enterprises in the Greater Bay Area and help them secure financing.
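
A minimal sketch of the data flow described above follows, assuming a simplified record format: an enterprise uploads logistics data, the platform analyzes it, and a tamper-evident contract record is produced by hashing the data, standing in for the blockchain-backed smart contracts mentioned in the lecture.

```python
# Minimal sketch of the upload -> analyze -> contract flow described above.
# The record fields and hashing scheme are illustrative stand-ins for the
# blockchain-based smart contracts mentioned in the lecture.
import hashlib
import json
import time

def upload_logistics(enterprise, shipments):
    """Enterprise pushes logistics/production records to the cloud platform."""
    return {"enterprise": enterprise, "shipments": shipments, "ts": time.time()}

def analyze(record):
    """Toy analysis: on-time delivery rate as a supply-chain health signal."""
    on_time = sum(1 for s in record["shipments"] if s["on_time"])
    return {"on_time_rate": on_time / len(record["shipments"])}

def make_contract(record, analysis):
    """Produce a tamper-evident contract record by hashing its contents."""
    payload = json.dumps({"record": record, "analysis": analysis}, sort_keys=True)
    return {"analysis": analysis,
            "digest": hashlib.sha256(payload.encode()).hexdigest()}

record = upload_logistics("ExampleCo", [{"id": 1, "on_time": True},
                                        {"id": 2, "on_time": False}])
contract = make_contract(record, analyze(record))
print(contract["analysis"], contract["digest"][:12])
```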

With the establishment of AIRS Cloud, a variety of high-tech problems will be studied on the platform in the future. Professor Hwang introduced a plan that is already running on the platform: using the deep-learning resource pool in AIRS Cloud for robot vision training. As models and datasets grow more complex, deep-learning-based machine vision training consumes more and more computing resources. The AIRS Cloud artificial intelligence module has completed hardware construction and software installation, and deep-learning training tasks can now run normally in its AI Station module. As shown in Figure 5, the completed AIRS Cloud will contain many different types of resource pools, not only traditional IaaS and BDAaS pools but also edge computing pools, so that deep-learning tasks can be applied to various scenarios, such as the Internet of Things and edge computing, with the computing power provided by AIRS Cloud.
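
To make the idea of scheduling a vision-training job onto one of several resource pools concrete, here is a small, self-contained sketch. The pool names and the submit() interface are hypothetical and do not represent AIRS Cloud's actual API.

```python
# Hypothetical job-submission sketch for a deep-learning vision task on a
# cloud resource pool; pool names and the submit() interface are illustrative
# and do not represent AIRS Cloud's actual API.

RESOURCE_POOLS = {"iaas": 0, "bdaas": 0, "ai_station": 8, "edge": 2}  # free GPUs

def submit(job, pool):
    """Place a job on a pool if it still has enough free accelerators."""
    if RESOURCE_POOLS.get(pool, 0) >= job["gpus"]:
        RESOURCE_POOLS[pool] -= job["gpus"]
        return f"job '{job['name']}' scheduled on pool '{pool}'"
    return f"pool '{pool}' lacks capacity for '{job['name']}'"

vision_job = {
    "name": "robot-vision-train",
    "gpus": 4,
    "epochs": 50,
    "dataset": "warehouse-images",   # hypothetical dataset name
}
print(submit(vision_job, "ai_station"))
print(submit({"name": "edge-infer", "gpus": 4}, "edge"))
```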

Figure 5: Deep learning training process based on AIRS Cloud

At the end of the lecture, Professor Kai Hwang interacted with the online and offline audience and answered their questions.
