Robots, Autonomous Driving Vehicles, and Maps

There is a kind of data that robots must have to move about in our everyday spaces, and that autonomous driving vehicles must have to move safely on our roadways: maps. The maps we're talking about, however, are a bit different from the map apps or navigation systems we are familiar with. They are machine-readable, 3D/HD maps. These maps play an extremely important role, because robots and autonomous driving vehicles rely on them for the location recognition and route planning that humans do naturally. This is why HD maps are called part of the brain of an autonomous driving vehicle. NAVER LABS is therefore continuously developing solutions for creating 3D/HD maps. We create HD maps both indoors, using the mapping robot M1, and on roadways, through the mobile mapping system R1 and aerial imagery. However, one more problem still needs to be solved: updates. The form of the world is always changing, so for maps, staying up to date is as important as accuracy. Maps for robots and autonomous driving vehicles are no different. At NAVER LABS, we are researching techniques to address this problem using robots, AI, MMS (mobile mapping systems), and more.

Technology where Robots and AI Find Changed Shop Names

Last year, we developed "self-updating map" technology that automatically discovers changes in shops within large-scale indoor spaces. Robots move about expansive and complex commercial spaces and accurately pick out changed shop names. Computer vision and deep learning technology is used to automatically analyze the images the robots collect. But since shopping malls are filled with so much visual information, it was very important to distinguish shop information from advertisements, passersby, and other visual noise, and to perceive it accurately.
The algorithm NAVER LABS developed to achieve this can very accurately recognize when a shop has newly opened, closed, or been replaced, or when only the name of a shop has changed, and the results were presented at CVPR, a leading computer vision and pattern recognition conference.

Technology that Automatically Updates HD Road Maps

This year, we are running the ACROSS project to extend this kind of updating technology to roadways. Of course, the environment and conditions are very different from those indoors. ACROSS uses a method in which mapping devices built from low-cost sensors are installed on multiple vehicles, which then simultaneously identify changes in roadway information. The image data collected by the mapping devices is likewise analyzed by AI. It detects changes in the existing HD maps' road layout (lane information, stop line locations, road markings, etc.) and 3D information (traffic signs, buildings, traffic lights, street lights, etc.). In reality, it must also handle changes in season, time, and weather, and reliably distinguish the other cars on the road. It is a task that is challenging in many ways, but we are continuously making progress. In the future, robots and autonomous driving technology will gradually break free from the lab and permeate our lives. To get there, two things we have to prepare are HD map creation technology and an updating solution. To be more accurate, and always up to date: we are researching the technology to accomplish this end.
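The shop-change classification described above can be sketched at a very high level. This is an illustrative toy, not NAVER LABS' actual algorithm: we assume each shop location yields a hypothetical image embedding from some deep network plus an OCR'd name, and classify the change by comparing two visits.

```python
# Hedged sketch: classifying POI (shop) changes between two mapping runs.
# In the real system a deep network encodes robot-collected images; here we
# stand in made-up fixed-length "signboard embeddings" and use cosine
# similarity plus a text comparison to label each shop location.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify_poi_change(old, new, sim_threshold=0.8):
    """old/new: dicts with 'embedding' (image feature) and 'name' (OCR text),
    or None if no shop was detected at that location on that visit."""
    if old is None and new is not None:
        return "opened"
    if old is not None and new is None:
        return "closed"
    same_look = cosine(old["embedding"], new["embedding"]) >= sim_threshold
    same_name = old["name"] == new["name"]
    if same_look and same_name:
        return "unchanged"
    if same_look and not same_name:
        return "name_changed"
    return "replaced"

# Example: both the signboard image and the OCR'd name changed.
before = {"embedding": [0.9, 0.1, 0.0], "name": "Cafe A"}
after = {"embedding": [0.1, 0.9, 0.2], "name": "Burger B"}
print(classify_poi_change(before, after))  # replaced
```

The threshold and the open/closed/replaced/name-changed label set mirror the categories mentioned in the article; everything else (embedding size, names) is invented for illustration.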
NAVER's first robot was M1. Having made its debut in 2016, M1 is a mapping robot that creates three-dimensional high-precision maps of indoor spaces. Who were the maps for? Other self-driving robots. Maps created by M1 were uploaded to the cloud and connected in real time so that other robots could drive autonomously. We introduced this new self-driving robot platform under the name AROUND, and have continuously presented its achievements. In short, M1 is the starting point of our cloud-based autonomous driving robot platform. The three-dimensional maps made by M1 also act as the core data for a variety of location-based technologies: visual localization, which ascertains current indoor positioning where GPS is unavailable; AR navigation; and the self-updating map technology that updates indoor maps using robots and AI. Because these maps hold high application value in so many forms, we have diligently kept the platform up to date.

Upgraded M1 mapping technology: the M1X mapping robot

M1X is the successor to M1. By consolidating the mapping technologies of the two versions, we secured greater expandability and higher-quality output while streamlining equipment costs. On the hardware side, we enhanced driving stability so the robot can scan a space while moving without wobbling, and resolved vibration issues, which allows us to obtain much higher-quality data. With an optimized sensor configuration, we raised data quality while reducing robot production costs, improving robot localization accuracy by over 30% when applied. Where M1 mainly collected data optimized for self-driving robots, M1X collects data from more diverse sensors so it can be applied to a variety of indoor driving machines.
The data from M1X is currently applied not only to self-driving robots but also to higher-accuracy localization services such as AR navigation on smartphones.

The technology that initiates space-based services

Through the process of upgrading our mapping technology from M1 to M1X, we have obtained important outputs such as indoor localization technology, the self-driving robot platform, and AR navigation. We are now investing a great deal of time and effort in technology that captures data on everyday spaces even faster and without error, because such innovative mapping technology marks the new beginning for all space-based services.
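The scan alignment at the heart of mapping robots like M1 and M1X can be illustrated with a toy example. The sketch below is not NAVER LABS' pipeline; it is a minimal, translation-only 2D version of ICP (iterative closest point), assuming two small point sets that stand in for LiDAR scans of the same room taken from different poses.

```python
# Hedged sketch of the core idea behind LiDAR mapping: align two scans
# (here 2D, translation-only) by repeatedly matching each point to its
# nearest neighbor in the reference and shifting by the mean offset --
# a toy version of ICP, not NAVER LABS' actual SLAM solution.
def align_scans(reference, scan, iterations=20):
    tx, ty = 0.0, 0.0  # accumulated translation applied to `scan`
    for _ in range(iterations):
        dx_sum = dy_sum = 0.0
        for (sx, sy) in scan:
            px, py = sx + tx, sy + ty
            # nearest reference point (brute force)
            rx, ry = min(reference, key=lambda r: (r[0] - px) ** 2 + (r[1] - py) ** 2)
            dx_sum += rx - px
            dy_sum += ry - py
        tx += dx_sum / len(scan)   # shift toward the mean correspondence offset
        ty += dy_sum / len(scan)
    return tx, ty

room = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
# second scan of the same room, taken from a pose offset by (0.4, -0.3)
scan2 = [(x - 0.4, y + 0.3) for (x, y) in room]
tx, ty = align_scans(room, scan2)
print(round(tx, 2), round(ty, 2))  # ≈ 0.4 -0.3
```

A real mapping pipeline also estimates rotation, rejects outliers, and fuses many scans into one globally consistent map, but the match-then-shift loop is the same basic idea.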
Professor Kim Sangbae, Director of the Biomimetic Robotics Lab at the Massachusetts Institute of Technology, joined NAVER LABS as a technical consultant on July 1. Professor Kim, a distinguished robotics engineer, has developed robots such as the MIT Cheetah 1/2/3, Mini Cheetah, Hermes and Meshworm, and his Stickybot was featured in Time Magazine as one of the Best Inventions of 2006. The MIT Biomimetic Robotics Lab, led by Professor Kim, and NAVER LABS maintain a relationship of continuous industry-academia cooperation. In particular, the MIT Cheetah 3 and Mini Cheetah, developed through this cooperation with NAVER LABS, will be used to solve the mobility problems of robots in various areas, including sidewalks.

"There is no need for robots to do things that humans are well capable of doing. There are other areas that are suitable for robots."

"I think robots can play a big role in solving society's impending problems, such as the declining workforce. Mobility is essential for such physical services."
- Professor Kim Sangbae, from his NAVER seminar lecture

The recruitment of a technical consultant is aimed at strengthening cooperation with NAVER LABS even more organically. In particular, NAVER LABS' focus on technology that provides practical help to people in various everyday spaces corresponds closely with Professor Kim's research philosophy. We expect new and substantial synergies that will accelerate our technology roadmap, including technical cooperation on designing and controlling systems and mechanisms, cross-training of engineers through personnel and academic exchanges, and the discovery of talented individuals.
A-CITY is the future vision for cities pursued by NAVER LABS technologies. We research the technologies for a city where every urban space is connected by diverse autonomous machines, artificial intelligence analyzes vast amounts of data to make predictions, and spatial data is informatized and updated to automate even services such as delivery and logistics. To achieve this, NAVER LABS is gathering a wide array of spatial data from city spaces to make HD maps for machines, and also developing an intelligent autonomous machine platform that can be adapted to place, environment and purpose. We are also researching natural human-machine interaction (HMI) with the goal of providing useful services to people in everyday spaces. These are the core technologies that NAVER LABS is currently advancing to accelerate the arrival of A-CITY, its future vision for cities.

M1 mapping robot, the beginning of indoor autonomous driving

M1 is a mapping robot that produces high-precision 3D maps of indoor spaces. Its HD maps, made by applying SLAM technology to the point clouds collected by LiDAR, are used as the core data for diverse position-based services, including indoor autonomous service robots. With M1X, an upgraded version of M1, we are expanding data usability even further while also increasing accuracy.

See more on M1

The core data for road-level autonomous driving: Hybrid HD mapping

Hybrid HD mapping, our original HD map production solution for autonomous driving machines, extracts the layout information of road surfaces from aerial images that capture large-scale urban areas. By organically combining this with data gathered by R1, our internally developed mobile mapping system (MMS), it enables the quick and accurate production of HD maps over extensive areas.
See more on Hybrid HD mapping

Technologies for automatically updating HD maps

For machine-readable maps, being up to date is of utmost importance. At NAVER LABS, we are conducting the ACROSS project for HD road maps and researching the self-updating map technology for indoor maps. ACROSS is a technology that senses changes in the road layout (lane information, stop line locations, road markings, etc.) and 3D information (traffic signs, buildings, traffic lights, light poles, etc.) using devices mounted on numerous vehicles. The self-updating map is a technology that automatically recognizes changes in points of interest (POI) in large-scale shopping malls via AI and autonomous robots.

See more on the ACROSS project
See more on the self-updating map technology

Mapping and localization for sidewalks with irregular surfaces and environments

Sidewalks, which can be seen as the middle ground between indoor areas and roads, are highly influenced by changes in the seasons and weather. That is why NAVER LABS is conducting a project called COMET to develop mapping and localization for sidewalks. We are producing devices with a sensor arrangement suited to the sidewalk environment, and developing algorithms to process the data this mapping equipment acquires. In the short term, people will carry and test the equipment, but the plan is for data to later be acquired directly by a four-legged walking robot that can move across diverse street surfaces. The Cheetah 3 and Mini Cheetah, developed by MIT with funding from NAVER LABS, will be utilized.
“R2D2: Reliable and Repeatable Detectors and Descriptors for Joint Sparse Keypoint Detection and Local Feature Extraction,” a visual localization project at NAVER LABS Europe that accurately ascertains specific locations despite environmental changes such as weather, seasons, time and lighting, is also a highly innovative technology; it won 1st place in the Local Feature Challenge of the Long-Term Visual Localization challenge at CVPR 2019.

See more on the R2D2 project

Seamless road-level precision localization technology

NAVER LABS is researching technologies that allow autonomous driving machines to precisely estimate their own positions in real time, even in complex urban environments. We apply our internally developed HD maps like a virtual sensor in order to perform seamless and stable localization even where GPS is unreliable, such as among dense buildings or in tunnels, and we are advancing the technology to extract the most accurate coordinates by fusing the information acquired from various sensors such as LiDAR, cameras, IMUs and wheel encoders.

See more on HD map aided localization

VL technology, recognizing location indoors using just one photo

Visual localization (VL) is a technology that analyzes an image to recognize the current location. It can ascertain the current position with high precision even indoors, where GPS is not available. The VL technology of NAVER LABS retains the highest level of global competitiveness as a solution that recognizes location by extracting and comparing characteristic points from 3D data captured by M1. This technology is currently applied to the indoor self-driving robot platform. Alongside VL, we are also developing AR technology that combines VIO (visual-inertial odometry), which tracks position by analyzing sensor and video data, with VOT (visual object tracking), which recognizes objects and estimates their position and orientation in 6DOF (six degrees of freedom), among other technologies.
AR is also a very important technology for utilizing the space itself as an interface.

See more on the VL technology

AMBIDEX, a robotic arm that is anatomically analogous to a human arm

Directly providing services to people in everyday spaces requires a robot arm capable of high-precision motion while simultaneously ensuring safety. AMBIDEX, developed through industry-university collaboration between NAVER LABS and KOREATECH, is a robotic arm that can interact safely with people thanks to an innovative wire-driven power transmission mechanism. We also added a waist section to expand its radius of activity, and we are researching reinforcement learning via simulators and other methods to enable smarter and more precise service scenarios.

See more on AMBIDEX

An on-road machine platform for autonomous driving

After becoming the first IT business to receive a permit for provisional autonomous driving operation from the Ministry of Land, Infrastructure and Transport in 2017, NAVER LABS has been advancing autonomous driving technology in all areas, from localization on actual roads to perception, planning and control. We are also producing HD maps for on-road autonomous driving using the Hybrid HD mapping and ACROSS solutions. Integrating these technologies and data, we are developing an autonomous driving machine platform that can be customized for a variety of purposes such as logistics, delivery and unmanned shops.

See more on the NAVER LABS autonomous driving technology

Autonomous driving via the map cloud and reinforcement learning: the AROUND platform

The AROUND platform is an independent solution developed by NAVER LABS with the goal of popularizing self-driving service robots.
It identifies its location and plans routes based on the map cloud made by the M1 mapping robot, and it enables smooth autonomous driving without the help of laser scanners by applying a deep reinforcement learning algorithm. High-accuracy indoor autonomous driving is achieved using only low-cost sensors and low processing power, in contrast to many pre-existing self-driving robots, which must perform core functions such as map creation, localization, route planning and obstacle avoidance entirely on board.

See more on AROUND

New possibilities for robotic services: the 5G brainless robot platform

NAVER LABS succeeded in the first-ever 5G brainless robot demonstration at CES 2019. This technology moves the computer that serves as the robot's brain to the cloud and connects to it via 5G. It effectively reduces production costs by enabling the simultaneous control of numerous robots, and because the cloud serves as the robot's brain, robots can be made small yet highly intelligent. We are expanding this technology through research in connection with the AROUND platform. Through all of this, the NAVER Data Center Gak strives to become the brain for numerous robots and to provide robotic services in various ways.

See more on the 5G brainless robot technology
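The brainless-robot division of labor can be sketched in a few lines. This is a hedged illustration, with the 5G link replaced by a plain function call: all planning lives in a hypothetical `CloudBrain` object, while the robot keeps only state and actuation.

```python
# Hedged sketch of the "brainless robot" idea: the robot carries only sensors
# and actuators, while planning runs on a remote "brain". Here the network
# hop is just a method call; in practice it would be a low-latency 5G link.
class CloudBrain:
    """Runs off-robot: receives a robot's state, returns a motion command."""
    def plan(self, position, goal):
        dx, dy = goal[0] - position[0], goal[1] - position[1]
        step = 1.0                           # clamp speed to 1 m per tick
        dist = max(abs(dx), abs(dy), 1e-9)   # avoid division by zero at goal
        scale = min(1.0, step / dist)
        return dx * scale, dy * scale

class BrainlessRobot:
    """Runs on-robot: no planning logic, only state and actuation."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def apply(self, vx, vy):
        self.x += vx
        self.y += vy

brain = CloudBrain()               # one cloud brain can serve many robots
robot = BrainlessRobot(0.0, 0.0)
goal = (3.0, 4.0)
for _ in range(10):                # each tick: upload state, download command
    vx, vy = brain.plan((robot.x, robot.y), goal)
    robot.apply(vx, vy)
print(round(robot.x, 1), round(robot.y, 1))  # 3.0 4.0
```

The design point the article makes falls out of the structure: since `CloudBrain` is shared and stateless here, adding a second robot costs no extra "intelligence" hardware on the robot itself.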
Technologies that connect NAVER with everyday physical spaces

NAVER LABS held a press conference on June 25. Following the recent CES 2019, it was an opportunity to reveal the missions and roadmaps guiding the technological advancements currently taking place at NAVER LABS. The vision presented that day was summarized as "Connect NAVER to the physical world." This direction stems from the rapid blurring of the boundaries between physical and virtual spaces, as technologies such as high-performance sensors, AI, robots and autonomous driving each approach the critical point of popularization. Although NAVER has grown to be the dominant force in online virtual spaces over the past 20 years, if it is to sustain its core value of connecting information and services, it must expand its modes, channels and methods. On this topic, NAVER LABS CEO Seok Sangok announced that the very spaces where users live may soon become service platforms, and that NAVER LABS is concentrating its research on robotics, AI, autonomous driving, AR and HMI (human-machine interaction) to naturally link physical spaces with NAVER services.

The 3rd infrastructure: auto-movables

CEO Seok presented the concept of self-moving spaces, or auto-movables, as a force that will change the city of the future. He emphasized that beyond movables and immovables, this third type of infrastructure will greatly change the way we live, and that in the near future these auto-movable spaces, equipped with information, services and products, will create entirely new connections for us. He also revealed the roadmap for realizing these concepts technologically.
To provide information and services in a physical space, the very first step is to precisely digitize the space in question, and actually delivering those information and services requires tools with matching physical capabilities. NAVER LABS plans first to develop solutions for building and updating 3D precision data for all spaces, indoor, outdoor and road-level, and to complete an autonomous machine platform that can move on its own through various spaces to provide information and services, based on HD mapping, localization, 5G cloud computing, 3D vision and related technologies and data. It was also revealed that they plan to raise the completeness of HMI (human-machine interaction) for the natural delivery of services among people in everyday spaces.

A-CITY, the vision for a future city undertaken by the technologies of NAVER LABS

A-CITY is the vision for a future city that NAVER LABS is shaping through this technological roadmap. Every urban space will be closely connected by a variety of autonomous machines, artificial intelligence will analyze massive amounts of data to make predictions, spatial data will be informatized, and even services like delivery and distribution will be automated. CEO Seok explained, "NAVER LABS is taking on A-CITY as a concept, but it is an inevitable future. It is not limited to simple services like shipping and logistics; it is going to change the way we live, with tremendous possibilities."

The smart autonomous machine platform that will connect all spaces: indoors, outdoors, roads

CEO Seok introduced the NAVER LABS solutions for the 3D spatial data that forms the fundamental base for service robots, autonomous driving, AR and more. Large-scale indoor spaces are covered by the M1 mapping robot.
The upgraded version, M1X, has been developed to the point of completing a point cloud map within about 40 hours of wide-range scanning of a massive indoor space. Next, the plan for sidewalks, regarded as the middle ground between roads and indoor areas, was also revealed. A project called COMET is underway to develop mapping and localization technology for sidewalks, which have uneven surfaces and are highly influenced by lighting and the seasons. NAVER LABS revealed that, in collaboration with MIT, it plans to later apply this mapping equipment and its algorithms to the four-legged walking robots MIT Cheetah 3 and MIT Mini Cheetah. Following CEO Seok, Leader Peck Jongyoon, who introduced NAVER LABS' core technologies for autonomous driving, emphasized the unique characteristics of the road environment. While sidewalks can be seen as spaces connected with indoor areas, roads differ completely in their traffic signal systems, safety standards and more. Describing road autonomous driving as a "combined art" in which technologies from many fields must be solved together, including mapping, localization, perception, prediction and planning, Peck stated that HD maps play an especially important role in autonomous driving in downtown areas, where there are numerous GPS shadow zones. The internally built HD maps are combined with sensor data from GPS, LiDAR, cameras and more to advance localization technology that is both highly precise and seamless. Leader Peck also introduced the hybrid HD mapping technology, which uses the internally developed MMS vehicle, R1, together with aerial photos to make HD maps of wide areas, announcing that within the year they will complete a layout map of 2,000 km of main roadways of four lanes or more in downtown Seoul.
Starting with the current 300 km section across major areas including all of Gangnam-gu, Yeouido, the Sangam area and Magok, the plan is to rapidly build road layout data covering the entire Seoul metropolitan area. The automation algorithm that enables road surface recognition through deep learning and vision technology was also introduced, along with research on ACROSS, a crowd-sourced HD map updating solution. Next, Peck announced that "the plan is to increase the number of vehicles with temporary permits for autonomous driving from the Ministry of Land, Infrastructure and Transport (MOLIT) in order to accelerate the development of technology for autonomous driving on the road," and that the "goal is to later develop an autonomous machine for the road that can be utilized for a variety of purposes including distribution, delivery, and unmanned shops through algorithms and data verified in the actual roadway environment."

Introducing original, world-class core technologies

CEO Seok Sangok and Leader Peck Jongyoon summarized the world-class core solutions and competencies held by NAVER LABS. These included the innovative solutions for creating machine-readable HD maps for indoor and road-level spaces, VL (visual localization) technology enabling location verification from just a photo in places without access to GPS, the AROUND platform that allows smooth autonomous driving without a laser scanner through the map cloud and reinforcement learning, brainless robot technology utilizing the ultra-low latency of 5G, and the AMBIDEX robot arm with 7 degrees of freedom (DoF) plus an added 3-axis waist section. Of all of these, CEO Seok revealed that the combination of the 5G brainless robot technology and the map-cloud-based AROUND platform, which drew high interest at CES 2019, is one of this year's crucial missions.
The strategy, pursued through collaboration with Qualcomm, is to maximize performance and usability by applying the technology behind the world's first successful 5G brainless robot demonstration to the autonomous driving robot platform. The 5G brainless robot technology, which moves the computer that serves as the robot's brain to the cloud and connects to it via 5G, can control a large number of robots simultaneously and thus effectively reduce manufacturing costs, and because the cloud replaces the robot's brain it enables the creation of small robots with exceptional intelligence. Along with Qualcomm, the NAVER Business Platform, KT and others are collaborating to set up this platform, and it was announced that the NAVER Data Center Gak in Chuncheon is preparing to serve as the brain for a variety of service robots. The VL technology, which can recognize the current location indoors from a single photo, was also emphasized as holding the highest level of global competitiveness. This technology, which recognizes location by extracting and comparing distinctive features from 3D data captured by M1, offers a groundbreaking solution to the problem of indoor localization without GPS access. Apart from VL, it was revealed that AR technology is also in development, coupling VIO (visual-inertial odometry), which analyzes sensor and video data to track location, with VOT (visual object tracking), which recognizes objects and estimates their position and orientation in 6DoF (six degrees of freedom: three of translation and three of rotation).

The role of NAVER LABS is to prepare for the foreseeable future using technology

CEO Seok explained that the roles of NAVER LABS have expanded rapidly because its technologies do not stay in the lab, but are oriented toward the spaces of our real lives.
He expressed hope that technology will offer practical assistance to people in a variety of environments, and introduced collaborations toward this end with the Seoul National University College of Nursing and the social enterprise Bear Better, adding that an open research environment where diverse experts can cooperate closely is another advantage of NAVER LABS. Leader Peck Jongyoon responded to questions following the press conference by saying, "We are developing our technologies with a sense of mission. The future is clearly approaching, but there are still not many places domestically that are passionately preparing for it." He emphasized the importance of the core technologies currently being researched, stating, "I believe that if we do not prepare for the future, the unfortunate circumstance may someday occur where we have no choice but to use the technologies of other countries." In closing, CEO Seok Sangok stated, "We are taking the lead in preparing for a future where our everyday physical spaces are recreated as new service spaces, within which spaces, machines, information and services are all naturally interconnected," and conveyed that the goal is to "make the present that feels so normal to us feel, through technology, like an inconvenience of the past."
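The sensor-fusion localization described at the press conference (combining HD maps with GPS, LiDAR, cameras and wheel encoders) rests on estimation techniques such as Kalman filtering. The sketch below is a hedged, one-dimensional toy, not the production system: it fuses drifting wheel-odometry increments with occasional absolute position fixes standing in for HD-map or GPS observations.

```python
# Hedged toy of sensor-fusion localization: a 1D Kalman filter that predicts
# with (drifting) wheel-encoder increments and corrects with absolute
# position fixes. All numbers here are invented for illustration.
def kalman_fuse(odometry_steps, fixes, q=0.04, r=0.25):
    x, p = 0.0, 1.0                     # state estimate and its variance
    for step, z in zip(odometry_steps, fixes):
        x, p = x + step, p + q          # predict from wheel odometry
        if z is not None:               # correct with an absolute fix
            k = p / (p + r)             # Kalman gain
            x, p = x + k * (z - x), (1 - k) * p
    return x

# Odometry over-reads by 10% each 1 m step; fixes arrive every other step.
odo = [1.1, 1.1, 1.1, 1.1]            # true motion: 1.0 per step (truth = 4.0)
fixes = [None, 2.0, None, 4.0]        # absolute fixes near the true position
est = kalman_fuse(odo, fixes)
dead_reckoning = sum(odo)             # drifts to ~4.4, away from truth
print(round(est, 2), round(dead_reckoning, 2))  # estimate ≈ 4.11 vs. 4.4
```

Even in this toy, the fused estimate lands closer to the true position (4.0) than raw odometry does, which is the "seamless even where GPS is unreliable" property the article describes, in miniature.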
The features of cities changed greatly following the advent of the modern elevator in the 19th century. Going beyond the limits of flat land, high-rise architecture gave people a wholly new everyday space. The history of innovations in locomotion has repeatedly transformed our lives in this way. NAVER LABS, having conducted research on technologies including autonomous driving, robotics and AI, is now concentrating on a new concept that will change the cities of our future: self-moving spaces, or auto-movables. In the near future, auto-movables will provide information, services, products and more, creating new connections. Imagine a city where every urban space is closely connected by a variety of autonomous machines, where artificial intelligence makes predictions by analyzing vast amounts of data, and where the informatization of spatial data and even services such as delivery and logistics are all automated. We have given this future city the name A-CITY. This is the future that the technologies of NAVER LABS now aspire to achieve.

Autonomous Everywhere–HD maps for machines

The first phase in speeding the coming of A-CITY is making HD maps for machines. Such maps are the most basic data allowing autonomous machines to move freely. The various spaces within a city impose differing conditions on autonomous driving technology, so a wide array of techniques must be applied to make HD maps for every space. The M1 mapping robot covers large-scale indoor spaces like shopping malls. To quickly and precisely make massive HD maps of roads on a metropolitan scale, we developed an original solution called Hybrid HD mapping. We are also developing a mapping technology for sidewalks, where road surfaces are uneven and non-uniform. Updates matter too, as cities continuously change their form.
That is why we developed the self-updating map technology, which uses robotics and AI to ascertain changes in indoor spaces, and are conducting the ACROSS project for updating road-level data. The technology to capture every city space in seamlessly connected HD maps: this is the foundation of A-CITY.

Autonomous Everything–Intelligent autonomous machine platform

Offering services through autonomous machines requires a great deal of technology working in unison. At NAVER LABS, for example, we are researching four-legged robot locomotion on uneven and non-uniform surfaces, artificial intelligence for smart robot services, robot arms that provide services directly to people, and even AR technology that uses the space itself as the interface. Among these, high-precision localization technology for recognizing one's current position is crucial. Although ascertaining a current position may look easy, machines face numerous environmental restrictions. The first thing that comes to mind for positioning is surely GPS, but GPS does not work indoors, and even outdoors it suffers intermittent interference in the concrete jungle of a densely built area. We are researching a wide array of localization technologies at NAVER LABS to overcome these issues, converging techniques such as high-precision road-level localization linked to HD maps and technology that finds your location from just one photo, so a machine can accurately recognize where it is and plan routes effectively from anywhere. At NAVER LABS we are also concentrating on 5G and cloud computing as an important solution for popularizing robot services. The 5G brainless robot technology moves the robot's computer, which serves its cerebral functions, to the cloud and connects to it over 5G, enabling effective cost reductions by controlling numerous robots simultaneously.
Since the cloud takes on the role of the robot's brain, smaller robots with outstanding intelligence become possible. This is why we see this technology as the primer for the popularization of self-driving robots. Outdoors, however, road-level autonomous driving operates in a rather special environment: there are traffic rules to be abided by and various signals to be read. Since becoming the first IT company to acquire a permit for provisional autonomous driving operation from the Ministry of Land, Infrastructure and Transport in 2017, NAVER LABS has been advancing every area of road autonomous driving technology, including HD mapping, high-precision localization, perception, planning and control, on actual roadways. We are building an intelligent autonomous machine platform that combines high-precision localization, 5G and cloud computing, and various other autonomous driving and robotics technologies. This platform will relieve people of concerns about the technologies needed to achieve autonomous driving in all urban spaces, allowing them instead to focus entirely on finding and designing new and valuable experiences for auto-movable spaces.

Autonomous Everyday–New connections permeating our everyday lives

Our essential goal is not to keep future technologies in the lab, but to infuse them into the spaces of people's everyday lives. Interactions with users must feel extremely natural, with hardware dependable enough for error-free daily operation and software tested thoroughly enough that no erratic behavior occurs. Beyond that, the core algorithms that serve as the machines' intelligence must be fully optimized, and spatial data must be kept up to date. These are certainly not easy tasks, but this future is sure to come. Moreover, the technologies we are researching are not only for certain special services.
Just as the emergence of the elevator gave birth to unprecedented new living spaces in the form of high-rise buildings, and just as mobile technology keeps transforming service experiences, the technology connecting auto-movable spaces will expand into even more possibilities than we can currently imagine. A-CITY, to be filled with completely new connections—this blueprint, though still unfamiliar to us, will someday become a normal part of our daily lives. We are fully devoted to researching the core technologies that will make that day come sooner. > Subscribe to our newsletter
NAVER LABS presented its paper “Did it change? Learning to Detect Point-Of-Interest Changes for Proactive Map Updates” at CVPR 2019, the world’s largest conference on computer vision and pattern recognition, sponsored by IEEE. The paper reports the results of the self-updating map research conducted jointly by NAVER LABS and NAVER LABS Europe over a period of about one year. Core technologies of NAVER LABS, such as robotics, computer vision, and deep learning, were utilized to keep map information up to date by having autonomous robots collect and analyze data from large indoor spaces and recognize spatial changes. Meanwhile, NAVER LABS Europe ranked first in the “Local Feature Challenge” category of the “Long-Term Visual Localization” challenge. The challenge was to determine the location from which a nighttime photograph was taken, based on daytime photographs of a particular landmark and their shooting locations. Here, researchers at NAVER LABS Europe successfully developed a deep learning-based feature that surpasses the scale-invariant feature transform (SIFT) feature that has been used for nearly 20 years in the field of local feature detection. Going forward, it is expected to be applicable to various fields of computer vision beyond visual localization. Related articles and websites A Self-Updating Map: Technology through which AI and Robots find changed signboard NAVER LABS' Indoor Dataset - COEX POI Change Detection (Jun. 2018 and Sep. 2018) CVPR 2019 Workshop: Long-Term Visual Localization under Changing Conditions Paper URL
NAVER LABS' indoor dataset is the result of scanning COEX, one of the largest shopping malls in Korea, twice at an interval of about two months (Jun. 2018 and Sep. 2018). The dataset consists of 17.5K geo-localized images covering 578 points of interest (POIs), captured by a device called Pumpkin that carries two LiDARs and multiple cameras. We currently provide only the images taken by Pumpkin’s left and right side cameras, which are designed to capture storefront images that can be used for POI recognition and change detection tasks. In the near future, we will also release images taken by the other camera types, so the dataset can also be used for VSLAM and visual localization research. Downloads COEX POI Change Detection dataset Scanning device: Pumpkin Pumpkin is equipped with the following main sensors: Cameras: 6 x Sony RX0 (2 with Wide Angle Lens: Samyang Fisheye Lens), 2400x1600, 2Hz, Anti-Distortion Shutter — 1/32000 super-high-speed shutter, ZEISS Tessar T* Lens, 84° FoV (Samyang Fisheye Lens: 106° HFoV, 70° VFoV) LiDAR: 1 x Velodyne Puck 16-channel LiDAR, 360° HFoV, 30° VFoV, 4 planes, 10 Hz, 100 m range, 0.1~0.4° vertical resolution, 2.0° horizontal resolution, Sensor Location Data format This dataset consists of images and their poses. The name of each image includes the serial number and timestamp as '[serial #]_[timestamp].jpg'. The poses at which all images were acquired are in a separate file, 'sensor_trajectory.hdf'. In this file, 7-degrees-of-freedom (DoF) poses for all of the images are recorded. The 7-DoF states are 'x, y, z' for position and 'qw, qx, qy, qz' for orientation, in serial order. The two tabs, pose and stamp, are paired: the pose for the n-th stamp is the n-th entry in the pose tab. If you are more familiar with '.json' than '.hdf', you can download the file to convert it. How to generate data Data acquisition All of the images of this dataset were acquired by Pumpkin.
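The naming and pairing conventions above can be sketched in a few lines. This is a minimal illustration, assuming the '[serial]_[timestamp].jpg' scheme and the n-th-stamp/n-th-pose pairing described in the data format; reading the real 'sensor_trajectory.hdf' would use a library such as h5py, whereas here the data is inlined.

```python
# Sketch of pairing dataset images with their 7-DoF poses, assuming the
# naming scheme '[serial]_[timestamp].jpg' and that the n-th entry of the
# 'pose' tab corresponds to the n-th entry of the 'stamp' tab.

def parse_image_name(filename):
    """Split '[serial]_[timestamp].jpg' into (serial, timestamp)."""
    stem = filename.rsplit(".", 1)[0]
    serial, timestamp = stem.split("_", 1)
    return serial, float(timestamp)

def pair_images_with_poses(filenames, stamps, poses):
    """Match each image to the pose whose stamp is nearest its timestamp.

    Poses are 7-DoF tuples: (x, y, z, qw, qx, qy, qz).
    """
    paired = {}
    for name in filenames:
        _, t = parse_image_name(name)
        n = min(range(len(stamps)), key=lambda i: abs(stamps[i] - t))
        paired[name] = poses[n]
    return paired

# Illustrative values, not taken from the actual dataset.
filenames = ["CAM01_1530000000.10.jpg", "CAM01_1530000000.60.jpg"]
stamps = [1530000000.1, 1530000000.6]
poses = [(0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0),
         (0.5, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0)]
print(pair_images_with_poses(filenames, stamps, poses))
```

The serial numbers and pose values here are made up for illustration; only the file naming pattern and the 7-DoF layout come from the dataset description.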
To collect as much data as possible, we acquired images periodically and without stopping, instead of using stop-and-go motion. As noted above, because the RX0 has an anti-distortion shutter, we assumed there is no motion distortion. All of the data, including point clouds and images, were recorded on the same timeline under the UNIX timestamp of the main processor. Estimating image pose For accurate estimation of the pose at which each image was acquired, LiDAR-based SLAM was performed. However, since acquisition from the LiDAR and cameras did not happen at the same time (i.e., they were asynchronous), linear interpolation based on timestamps gave the pose of Pumpkin at the moment each image was acquired. The pose of each image could then be calculated from the relationship between the base of Pumpkin and each camera, and the pose was tagged to each image. Blurring To publish the dataset, we blurred faces in the images with our object detection model. The model was trained on data from Naver Street View, which includes face annotations. We ran the model on our images to localize faces and applied a median filter to blur them. The remaining faces that the model failed to localize were handled manually.
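The timestamp-based interpolation step can be sketched as follows. This is a generic illustration of interpolating a 7-DoF pose between two SLAM keyframes (linear interpolation for position, spherical linear interpolation for the quaternion), not NAVER LABS' actual pipeline; all values are made up.

```python
# Given LiDAR-SLAM poses at times t0 and t1, estimate the device pose at
# an image's timestamp t in between. Positions are interpolated linearly;
# orientations (unit quaternions, w-x-y-z order) use slerp.
import math

def lerp(p0, p1, u):
    return tuple(a + u * (b - a) for a, b in zip(p0, p1))

def slerp(q0, q1, u):
    """Spherical interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # take the shorter arc
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:                   # nearly parallel: fall back to lerp
        q = lerp(q0, q1, u)
    else:
        theta = math.acos(dot)
        s0 = math.sin((1 - u) * theta) / math.sin(theta)
        s1 = math.sin(u * theta) / math.sin(theta)
        q = tuple(s0 * a + s1 * b for a, b in zip(q0, q1))
    norm = math.sqrt(sum(c * c for c in q))
    return tuple(c / norm for c in q)   # renormalize

def interpolate_pose(t, t0, pose0, t1, pose1):
    u = (t - t0) / (t1 - t0)
    pos = lerp(pose0[:3], pose1[:3], u)
    quat = slerp(pose0[3:], pose1[3:], u)
    return pos + quat

# Image captured midway between two SLAM poses 0.1 s apart.
p0 = (0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0)
p1 = (1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0)
print(interpolate_pose(0.05, 0.0, p0, 0.1, p1))  # position x ≈ 0.5
```

In practice one would then compose this interpolated base pose with the fixed base-to-camera transform to tag each image, as the text describes.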
When GPS cannot be reached We can see our location just by turning on a navigation device or map app. We are used to this, thanks mostly to GPS. But what about indoors? Verifying a location indoors, with no GPS signal, remains a troublesome task. It is a problem that must be solved before guide services or indoor self-driving robots become possible, and the technologies and infrastructure to do so exist. Let’s take a look at the technologies NAVER LABS is using to find solutions. A technology where just a photo is enough to recognize a location We have raised the bar for visual localization (VL) technology. VL is a technology that determines a location using an image. In a way, this resembles our own daily experiences: people also view their surroundings with their eyes to identify where they are at any given moment. Of course, what the machine sees is a bit different from what people see. VL looks for distinct features in the image to identify positional information. Visual localization demo At NAVER LABS, we use the mapping robot M1. We extract distinct features from the data captured by M1 to produce a “feature map.” Information used for calculating a position is also included in this map. Using this feature map, positioning services can be performed with just one picture taken on a smartphone. The error is much smaller than with GPS. Not only that, it can even accurately measure the direction you’re facing. Uninterrupted positioning is also important We now know that the current indoor position can be identified using VL technology. But neither people nor robots just stand still in one place; they move around. Naturally, precision positioning technology for situations involving movement is crucial. The technology used for this scenario is Visual Inertial Odometry (VIO), which analyzes sensor and video data to track a position. This technology also incorporates an optimization algorithm.
This is to enable uninterrupted positioning in real time on a smartphone, even with a limited network connection and a low-performance camera. Comparison: (from left) VL alone → VL + VIO → VL + VIO + optimization algorithm Essentially, VL technology establishes one’s current location, and while in motion the real-time position is tracked using VIO with the applied optimization algorithm. These positioning technologies are used in the Indoor AR Navigation and Indoor Self-driving Robot developed by NAVER LABS. There is one more positioning technology that is useful for Indoor AR Navigation: Visual Object Tracking (VOT). This is a technology that uses image recognition to estimate the position and orientation of a moving object in 6DoF (six degrees of freedom: three translational axes plus three rotational axes). In an environment where VL does not function properly or is inaccurate, VOT is used to identify the exact location of an object or to add content for specific areas. VOT (visual object tracking) demo The starting point of indoor location-based services: positioning The core context of location-based services is, quite obviously, location. That is why solving the problem of positioning indoors, where GPS doesn’t function, also means promoting the birth of new services that we have never been able to experience indoors. No longer will we have to struggle to find our way around a big department store on a first visit, and robots will be able to provide services while planning and following routes on their own. AR, which expands space itself into an interface, can also lead to more varied and useful services based on user location. This is the motivation behind our continued research on indoor positioning technologies.
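The division of labor described above — VL providing an occasional absolute fix, VIO providing continuous relative motion — can be illustrated with a toy filter. This is a deliberately simplified 2D sketch, not NAVER LABS' actual algorithm; the class, gains, and data are all illustrative.

```python
# Toy VL + VIO fusion: VIO dead-reckons every frame (and drifts), while an
# occasional VL fix from a single photo pulls the estimate back toward the
# true absolute position.

class FusedLocalizer:
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def vio_step(self, dx, dy):
        """Integrate a relative motion estimate (accumulates drift)."""
        self.x += dx
        self.y += dy

    def vl_fix(self, x_abs, y_abs, weight=0.8):
        """Blend in an absolute visual-localization fix to cancel drift."""
        self.x += weight * (x_abs - self.x)
        self.y += weight * (y_abs - self.y)

loc = FusedLocalizer()
for _ in range(10):            # walk forward with slightly biased VIO
    loc.vio_step(1.01, 0.0)
loc.vl_fix(10.0, 0.0)          # one photo-based fix corrects the drift
print(round(loc.x, 3))
```

A production system would of course weight the fix by its estimated uncertainty and smooth the correction over time; the fixed blend weight here just makes the drift-correction idea visible.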
What’s ACROSS NAVER LABS’ ACROSS is a project initiated to develop a crowdsourced mapping solution that maintains the recency of HD road maps. Background "An HD map is the most essential piece of data required to enable autonomous driving on the road" Precise HD maps are essential for an autonomous driving machine. HD maps allow the machine to recognize its current location more reliably; the sensors equipped on the machine alone may sometimes not be enough for that job. This prior knowledge is also useful when planning a driving route and predicting which areas will demand more attention. The importance of an HD map therefore grows in a complex large city. That is why NAVER LABS has continued to develop HD maps with a unique technology called hybrid HD mapping. Hybrid HD mapping is a method in which a wide range of road layout information is first obtained from aerial photographs and then organically combined with point cloud data collected on the road by R1, an independently developed mobile mapping system (MMS). This solution has the strength of allowing maps on the scale of a large city to be constructed more cost-effectively and in a shorter period of time, while of course maintaining a high level of precision. However, there is still something missing. It’s the destiny of all maps: keeping them up to date. Maps reflect reality, but not the present. The time when a map was made will always be in the past. Since then, a new road may have appeared or a new building may have been built. An updating solution is therefore directly tied to maintaining the precision of a map. (The same is true for the self-updating map introduced earlier, a technology for keeping indoor maps up to date that utilizes robots and AI.)
Approach "The dilemma of crowdsourcing, a tradeoff between the costs and performance of sensors in the mapping device" That is why hybrid HD mapping technology also requires an updating solution, and the ACROSS project is research aimed at developing one. We have selected a crowdsourced mapping method: mapping devices are installed in multiple vehicles to simultaneously identify changes in road information over a wide area. We are currently developing a solution that detects and updates changes in the road layout (lane information, locations of stop lines, road markers, etc.) or in 3D information (traffic signs, buildings, traffic lights, streetlights, etc.) by processing the image data collected by the sensors inside the mapping devices. However, there remains a dilemma to overcome: we have to make the mapping devices highly compact using low-cost sensors (cameras, IMU, GPS). This allows them to be installed in more vehicles, addressing both the coverage and the cycle of detecting changes in an HD map. However, designing a mapping device with low-cost sensors and processors inevitably brings performance tradeoffs. In the end, device design that facilitates wide use, together with algorithm optimization, constitutes the core of the ACROSS project. To this end, a wide range of technologies developed by NAVER LABS, including sensor fusion, computer vision, image processing, and machine learning, are being continuously applied. 5G networks also offer a new opportunity for ACROSS. The high bandwidth of 5G opens the way to an environment where map information can be received faster and updated simultaneously. Above all, more options have become available between cloud and edge computing for optimizing across devices and algorithms. Challenge "A world where high-precision 3D data on cities and roads is updated in real time."
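The core comparison ACROSS performs — matching elements detected from vehicle imagery against elements stored in the HD map — can be sketched abstractly. This is a hedged illustration only: the element IDs, coordinates, and distance threshold are invented, and the real system works on much richer lane and 3D geometry.

```python
# Sketch of HD-map change detection: compare map elements (e.g. traffic
# signs, stop lines) against elements detected from mapping-device imagery,
# reporting additions, removals, and moves beyond a tolerance.
import math

def diff_map(map_elems, detected, tol=0.5):
    """map_elems / detected: {elem_id: (x, y)} in a shared map frame."""
    changes = []
    for eid, pos in detected.items():
        if eid not in map_elems:
            changes.append(("added", eid))
        elif math.dist(pos, map_elems[eid]) > tol:
            changes.append(("moved", eid))
    for eid in map_elems:
        if eid not in detected:
            changes.append(("removed", eid))
    return changes

# Illustrative map fragment and one pass of detections.
hd_map = {"sign_12": (10.0, 2.0), "light_3": (15.0, 4.0)}
observed = {"sign_12": (10.1, 2.0), "sign_99": (20.0, 1.0)}
print(diff_map(hd_map, observed))
```

In a crowdsourced setting, flags like these would only trigger an update after being confirmed by detections from multiple vehicles, which is what makes the wide-coverage, low-cost device design matter.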
We expect many trials and errors on the way to the success of the ACROSS project, and we remain relentless in overcoming challenges not yet mentioned here. It is important to remember, however, that these are crucial trials and errors: through such fierce challenges, the core technologies for HD maps and autonomous driving on the road will ultimately be acquired. This year, we will focus on designing the most suitable mapping device for ACROSS and optimizing algorithms based on those findings. Once this step succeeds, we will move on to a more diverse set of semantic mapping steps. Autonomous driving machines will form part of our lives in the future. HD maps for these machines will be there first, and then autonomous driving machines will come with the ability to update the HD maps automatically on their own. High-precision 3D data on cities and roads will create an organic, virtuous cycle. Even more precise, and even more up to date. The ACROSS project is preparing such a world. We will continue to share its progress and achievements.
1. 5G is a whole new change beyond fast internet. 5G is the next-generation communication technology after 4G, commonly called LTE, and offers ultra-high speed, hyper-connectivity, and ultra-low latency, upgraded in every respect. With fast 5G speeds, network capacity will be virtually unlimited, and latency will be so low that almost none is felt. Does this only mean improving the things we enjoy right now? Now that the role and importance of mobile networks have greatly increased, 5G is anticipated to bring about a whole new change to mobile communications, the surrounding ecosystems, and related industries. Along with technological advances such as artificial intelligence and XR, many things that were previously impossible or restricted are being newly attempted with 5G technology. 2. The most powerful use of 5G is with robots. Although Korea has introduced 5G technology quickly, there are not yet enough cases of using it effectively; the main 5G use cases so far only demonstrate large-scale video transmission with a one-directional signal pattern. For a robot to move, on the other hand, the role of 5G is essential, because a robot has a “bi-directional signal pattern” that requires constant data exchange between sensors and a high-performance computer. Furthermore, if communication latency becomes extremely low, we can hypothesize that high-performance robot control will be possible even if we separate the robot's computer from the main body. To verify this, NAVER LABS created a robot demo operated over 5G, a world first, jointly with Qualcomm and successfully demonstrated it at CES. We tried to use and apply the capabilities of 5G effectively. 3. 5G can create a “brainless” robot that never existed before.
The robot demonstration performed at CES with Qualcomm was a 5G brainless robot that succeeded in pulling the high-performance computer, which acts as the brain of the robot, out of the main body. Using 5G's low latency, we can separate from the robot the part that acts like the “cerebrum of a human,” requiring the greatest processing power. In the 5G era, the MEC server at the communication base station will act as the cerebrum controlling the robot's posture and motion, and the cloud will act as the robot's brain. In other words, we will be able to implement a “brainless” robot in which the 5G-connected cloud acts as the brain. 4. 5G Brainless Robot technology eliminates robots’ physical limitations. It has now become possible to separate the brain from the robot using 5G technology. So what does this make possible in a robot? Until now, a small robot could only carry a small computer; it was literally a physical limitation. However, if the cloud can act as the robot's brain, we can create a highly intelligent robot regardless of its size. This means a palm-sized robot with the intelligence of a high-performance computer can appear. With 5G brainless robot technology, we can control multiple service robots simultaneously from the cloud. Highly sophisticated robot algorithms can be provided through the cloud, updates are easy, and production costs can be rationalized since there is no need to add high-performance processors to each robot. In addition, placing high-performance processing power outside the robot significantly reduces its battery consumption. 5. 5G Brainless Robot technology is a catalyst for the popularization of robot services. One important factor for the popularization of robot services is maintaining the required performance while lowering manufacturing costs.
This way, we can lower the hurdles to commercializing robot services in various industries and spaces, and accumulate service know-how through numerous attempts in real spaces, which will accelerate popularization. 5G and cloud technology, which can overcome physical limitations and simultaneously control multiple robots with lower power, will be an important solution for the popularization of these service robots. At CES 2019, NAVER LABS successfully demonstrated the world's first 5G Brainless Robot with Qualcomm. Subsequently, at MWC19, KT, Intel, and NAVER Business Platform started joint development of a 5G-based service robot with us. The structure is to develop service robots with NAVER Business Platform in an ultra-low-latency environment using Intel and KT’s 5G solutions, with the NAVER Cloud Platform acting as the brain of the robots. Through 5G, the popularization of service robots is approaching much faster than we had imagined. Starting with 5G brainless robot technology, NAVER LABS will continue to produce products that accelerate the popularization of service robots.
NAVER LABS is taking part in public-private partnerships with 17 public agencies and private enterprises, including the Ministry of Land, Infrastructure and Transport, the National Geographic Information Institute, the Korea Expressway Corporation, and other pertinent enterprises, to build and update its HD map for autonomous driving vehicles. HD maps, while being themselves “sensors” for autonomous driving vehicles, are an essential infrastructure that can act as their “brain.” Recognizing this importance, NAVER LABS has been focusing its research on HD maps. Since 2017, NAVER LABS has been testing autonomous driving vehicles on actual roads after being granted a permit for provisional autonomous driving operations by the Ministry of Land, Infrastructure and Transport, and is advancing new autonomous driving solutions, such as technology to reliably ascertain locations in cities with high-rise buildings based on HD maps. In particular, at CES 2019, NAVER LABS introduced its ingenious HD mapping solution called “Hybrid HD Map,” which combines aerial photographs and MMS data. Through an MOU, the Ministry of Land, Infrastructure and Transport will support research and pilot projects to arrange a joint establishment system, while the Road Management Authority will cooperate in establishing and updating the major roads and sections of the pilot project. NAVER LABS has partnered with many private enterprises as well, and plans to keep building self-driving infrastructure with public agencies and private enterprises based on the research accumulated so far.
You have probably experienced this a few times already: you go to a shop you have not visited in a while and end up having to turn back because its name has changed. In fact, more than 30% of domestic spatial information is said to change each year. The world is constantly in flux, even at this very moment. In other words, a map that has been recently updated is an accurate map. If map data is managed manually, the update cycle is slow and production costs are steep. Maintaining the recency of maps is therefore a major concern for map users as well as online map service providers, which makes developing automation technologies for map updates crucial. To this end, researchers at NAVER LABS and NAVER LABS Europe have conducted joint research and developed technology for a “self-updating map.” This technology keeps map information up to date by recognizing business names that have changed, through analysis of large-scale indoor spatial data collected by an autonomous driving robot. To achieve this, Naver’s core technologies, such as robotics, computer vision, and deep learning, are utilized. Automatically updates changes in signboards using AI and a robot We first tested this map updating technology with a focus on large shopping malls, spaces where new stores open and other changes occur frequently. The self-updating map technology picks out only the stores that have changed in a large, complex interior space and contributes data that allows map information to be updated automatically and accurately. The entire system is organized as follows. First, the autonomous driving robot moves around and collects images and positional information inside the shopping mall. Then, after some time, we take pictures of the same places again.
We compare the map and location information of both sets of images to find the same spot, and determine immediately whether any changes have occurred using deep learning technologies. We have to be careful to distinguish whether a sign belongs to a storefront or is just an advertisement, because shopping malls are spaces with so much exposed information. The algorithm we developed is capable of accurately recognizing when stores in a shopping mall open, close, change, or just change their names over a period of time. We have verified that computer vision and deep learning technology, paired with an autonomous driving robot, is suitable for efficiently managing large-scale POI information and maintaining the recency of indoor map information. Although autonomous service robots have not yet been popularized, in the near future many people will live in spaces where they interact with robots frequently. Those robots will be able to provide a variety of services, including item delivery, security, and guidance, while simultaneously keeping indoor map information up to date using our self-updating map technology. The outcome of joint research between NAVER LABS and NAVER LABS Europe, to be presented at CVPR This technology was jointly developed by researchers at NAVER LABS and NAVER LABS Europe over a period of one year. The results will be presented at the Conference on Computer Vision and Pattern Recognition (CVPR), being held in California, USA this coming June, under the title “Did it change? Learning to detect point-of-interest changes for proactive map updates.” We will be able to attempt a variety of projects in the future based on these results: reflecting various spatial data, such as sales information, on a map in real time beyond business name changes, or recognizing and updating spatial information changes on roads, i.e., outside indoor spaces.
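The comparison step — crops of the same signboard location from two scans judged "same or changed" — can be sketched with a similarity threshold on feature embeddings. This is a simplified stand-in: the actual system uses trained deep networks, whereas the embedding vectors and threshold here are invented for illustration.

```python
# Sketch of POI change detection: embed two storefront crops taken at the
# same location months apart, and flag a change when the embeddings
# diverge (low cosine similarity).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def poi_changed(emb_before, emb_after, threshold=0.7):
    """Flag a POI as changed when its two embeddings diverge."""
    return cosine(emb_before, emb_after) < threshold

# Hand-made vectors standing in for deep-network embeddings.
same_shop = ([0.9, 0.1, 0.4], [0.85, 0.15, 0.38])
new_shop  = ([0.9, 0.1, 0.4], [0.05, 0.95, 0.1])
print(poi_changed(*same_shop), poi_changed(*new_shop))
```

The real difficulty the text points out — telling storefront signs apart from advertisements and passers-by — happens before this step, in the detection stage that decides which crops to compare at all.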
The world is constantly in flux, changing at this very moment. However, technologies are also constantly being developed: technology that will allow us to catch up with these changes.
Meet the wearable robot technology that increases physical strength in real life In 2017, NAVER LABS first introduced an electric cart called AIRCART, which incorporates wearable robot technology to enhance physical movement by increasing strength and endurance. The cart can be used by workers who move heavy cargo or by people with physical disabilities, significantly increasing their muscle strength or mobility. To apply this technology where it can benefit people on a day-to-day basis, NAVER LABS applied it to a cart, a tool frequently used by many people. AIRCART can transport heavy loads easily with just a light push. It moves easily up a hill, and returns safely down using an automatic brake. AIRCART has received a lot of attention not because of the complexity of the technologies implemented in it, but because of its applicability in real life. A reference model was actually put to use in a book store, which was followed by technological collaborations for commercialization. Last year, the AIRCART OPENKIT, which incorporates the patented technology and design of AIRCART, was made accessible to the public for six months while we began working on projects that aimed to apply the AIRCART technology in different areas. This is one of the results of that process: a wheelchair version of AIRCART. A wheelchair that can be pushed with one hand and allows eye contact with the wheelchair user As society ages, the demand for wheelchairs is increasing every year. One thing we took notice of was that a large proportion of caregivers or guardians of the elderly are elderly people themselves. Not only that, 40.3% of people who have used a wheelchair reported having experienced an accident during use (based on the Survey on the Usage Status of Electric Assisting Devices, 2015). These are issues that can be solved by applying the AIRCART technology to wheelchairs.
The AIRCART Wheelchair is equipped with the core technology of AIRCART, that is, technology for enhancing physical strength and endurance. It is designed so that anyone can push the wheelchair easily and safely with a small force, regardless of the weight of the person in it. When going down a slope, often a dangerous situation, it automatically maintains a set speed so that the person pushing the wheelchair does not have to pull back on it to keep it from rolling. If the person loses hold of the wheelchair, AIRCART brakes automatically and stops. <Testing of the wheelchair version of AIRCART – The brake is triggered automatically even if the caregiver loses control of the handle> For this project, we had to think beyond simply enhancing physical strength, because of the nature of wheelchairs: something one person sits in while another pushes. When you are in a wheelchair, it is difficult to have a conversation while making eye contact with the person helping you from behind. We wanted to solve this problem of interaction. With the AIRCART Wheelchair, the caregiver can walk forward while pushing the wheelchair from the side with one hand. It is designed to feel as if the caregiver is walking alongside the person in the wheelchair, allowing them to make eye contact, see each other's facial expressions, and have an interactive conversation. <Testing of the wheelchair version of AIRCART – A caregiver can easily push the wheelchair with one hand while making eye contact with the person in the wheelchair.> Furthermore, it weighs much less than conventional electric wheelchairs and has an automatic folding function that allows it to be carried like baggage. In addition to the strength enhancement of AIRCART, it has enhanced safety features for various emergency situations, such as vibration prevention and an overturn prevention device.
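The behaviors described above — proportional assist, downhill speed regulation, and an automatic brake when the handle is released — amount to a simple control policy, which can be sketched as follows. The gains, limits, and units are assumptions for illustration, not AIRCART's actual parameters.

```python
# Illustrative assist/brake policy: the motor assists in proportion to the
# force on the handle, regulates speed on a downhill slope, and brakes
# fully when the handle is released (a dead-man behavior).

def assist_command(handle_force, speed, handle_held,
                   gain=2.0, max_speed=1.5):
    """Return a motor command in [-1, 1]; negative values brake."""
    if not handle_held:
        return -1.0                        # handle released: full brake
    if speed > max_speed:                  # e.g. rolling down a slope
        return -0.5 * (speed - max_speed)  # brake toward the set speed
    return max(-1.0, min(1.0, gain * handle_force))

print(assist_command(0.3, 0.5, True))    # light push: assisted
print(assist_command(0.0, 2.5, True))    # too fast downhill: braking
print(assist_command(0.0, 2.5, False))   # handle released: full brake
```

A real controller would of course filter the force sensor, ramp the brake smoothly, and handle the overturn and vibration safeguards mentioned in the text; this sketch only shows the three regimes.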
Showcased at the ACM/IEEE International Conference on Human-Robot Interaction (HRI), through a collaboration with the CHIC LAB of Seoul National University’s College of Nursing This project was the outcome of the 6th group of NAVER LABS interns. The work did not end with applying a pre-existing technology to a wheelchair in the lab environment; it also allowed the interns to collaborate with the Consumer Health Informatics & Communication Lab (CHIC Lab) at Seoul National University’s College of Nursing to discover and address issues related to real-world applications of this technology. Through this process, the needs for light weight, portability, and realistic, detailed measures against dangerous situations, as well as easily overlooked issues such as going over small bumps, were discovered and properly addressed. On March 12, this project was presented at the ACM/IEEE International Conference on Human-Robot Interaction, where 500 experts share integrated research results on human-robot interaction. It attracted a lot of attention, receiving an award in the Student Design Competition category. This is a technology that seeks to understand people at a deeper level and to solve problems in the real world in the most natural way possible.
At CES 2019, NAVER LABS, in collaboration with Qualcomm, demonstrated the “5G Brainless Robot,” becoming the first in the world to successfully showcase technology that offloads the high-performance computer corresponding to the robot’s brain to an external cloud. This progressive collaboration with Qualcomm’s brilliant members created outstanding synergy to carry out challenging tasks successfully. Moreover, NAVER LABS met new partners at MWC19. KT, Intel, NAVER Business Platform, and NAVER LABS decided to cooperate based on each other’s respective technology and infrastructure for 5G-based service robots. Under this cooperation, KT and Intel contribute their 5G solutions for ultra-low-latency configurations to develop a service robot platform, with the NAVER Cloud Platform serving as the robot’s brain. 5G and cloud technology will be important solutions for the popularization of service robots. Still, there are many more possibilities and challenging projects ahead. To make those possibilities a reality, NAVER LABS will continue to work with brilliant partners through technological cooperation.
Technology that makes possible the external removal of the brain of a self-operating robot When we announced our 5G robot technology at CES 2019, many people assumed that the demonstration would be remote control-based, which is quite a cool piece of technology in and of itself. However, NAVER LABS has, in collaboration with Qualcomm, gone a step further to tackle a more challenging project, known as the “5G Brainless Robot.” In essence, this technology takes the high-performance computer, functioning as the brain of a self-operating robot, out of the robot’s main body. Despite the initial unfamiliarity of the idea, everyone will, at one point or another, have witnessed something similar in sci-fi films. In the movie The Avengers, for instance, it might not have felt so strange to see the cyborg Chitauri warriors collapse in tandem upon the destruction of the mothership. This idea, in fact, captures the basic essence of brainless robot technology. The minute decision-making that empowers the cyborgs to attack Hulk or thwart Thor’s assault is all formulated within the mothership and delivered via a wireless network – most likely 5G or higher. Had the telecommunications been 3G or 4G, a cyborg would have had no choice but to helplessly take the full brunt of Captain America’s punches, unable to avoid them on account of signal latency, even after recognizing an imminent attack and issuing a command in response. Robots featuring ultra-reliable and low-latency 5G technology Latency simply refers to the time required to give and react to a command. 5G is an ultra-reliable and low-latency communications technology with a latency of merely one millisecond, i.e. 0.001 seconds. This is one of the core technological features of 5G that is attracting significant attention.
Applying the ultra-reliable, low-latency characteristics of 5G to a robot’s control cycles opens up some very fascinating possibilities (a control cycle is the time required to process signals collected by the sensors and deliver commands to the motors). A typical humanoid robot has more than 100 sensors and 30 motors, and the average cycle during which sensor data is processed and commands are delivered to a motor is about 5 milliseconds. The latency of 5G communications, however, is a mere 1 millisecond, shorter than the control cycle. It therefore becomes possible to connect a robot to an external “brain” over the network for posture and movement control, instead of integrating a high-performance computer into the robot itself. In practice, an MEC server or a 5G-connected cloud can serve as the robot’s brain, realizing a brainless robot. NAVER LABS’ 5G brainless robot technology garnered significant attention at this year’s CES thanks to the successful demonstration of high-performance robot control using 5G’s ultra-reliable, low-latency features. It may sound easy in theory, but 5G is an area that has yet to be thoroughly explored. In particular, high-precision robot control over a 5G connection requires countless signals and processing data to travel back and forth, making the degree of difficulty extremely high. In the pole-balancing demonstration by the robot arm AMBIDEX, numerous commands to detect the pole’s tilting center of mass while adjusting the arm’s balance are delivered repeatedly over the 5G network. Technical difficulties aside, what kinds of possibilities could this technology offer us in real life? Advantages of externally relocating the robot brain Members of the NAVER LABS Robotics Team actively dedicate themselves to researching robots that ultimately provide services to people.
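The control-cycle arithmetic above can be sketched in a few lines of Python. The numbers are the illustrative figures from the text, and the feasibility rule (one network round trip plus server-side processing must fit inside the control cycle) is a deliberate simplification, not a full control-engineering model:

```python
# Simplified latency-budget check for a remote robot "brain" (illustrative only).
CONTROL_CYCLE_MS = 5.0  # typical sensor-to-motor cycle cited in the text

def remote_brain_feasible(network_latency_ms: float,
                          processing_ms: float,
                          cycle_ms: float = CONTROL_CYCLE_MS) -> bool:
    """True if uplink + downlink plus server processing fits in one cycle."""
    round_trip_ms = 2 * network_latency_ms
    return round_trip_ms + processing_ms <= cycle_ms

# 5G: ~1 ms one-way latency leaves room in the 5 ms cycle for computation.
print(remote_brain_feasible(network_latency_ms=1.0, processing_ms=2.0))   # → True
# 3G/4G: tens of milliseconds of latency blows the budget many times over.
print(remote_brain_feasible(network_latency_ms=30.0, processing_ms=2.0))  # → False
```

Under this toy budget, only a link with millisecond-class latency leaves any time at all for the external brain to compute, which is the intuition behind pairing 5G with cloud robotics.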
Most such robots require a high-performance computer installed inside the main body frame, which sounds, and actually is, expensive. Reducing production costs is therefore a prerequisite for the popularization of robots, and a cloud-based service robot platform is a viable solution (NAVER LABS’ AROUND platform, which drives autonomously indoors based on a map cloud produced by the mapping robot M1, was developed in a similar context). With the ultra-reliable, low-latency performance of 5G, however, it is possible to separate the processor from the robot, up to and including the functions that correspond to a robot’s cerebrum and require significant processing power. Since an external server can control a number of robots simultaneously, each robot no longer needs a high-performance processor embedded inside it, reducing production and maintenance costs. The cloud can also integrate and analyze the data collected by several robots, then conveniently update every robot with a newly learned algorithm. Furthermore, the robots’ power consumption becomes that much more efficient. A robot’s main computer consumes a great deal of battery power; just as roughly 20% of the human body’s entire energy use is devoted to neural activity, the main computer can account for as much as 40% of the energy consumed by a self-driving robot. In other words, simply moving the high-performance processor outside the body leads to a remarkable decrease in battery consumption, and battery charging time is a key factor in service robot usage. There is yet another interesting advantage: would it now be possible to create a small robot with the intelligence of a high-performance computer?
In the past, only a small computer could be embedded in a small robot due to physical limitations; if the cloud serves as the robot’s brain, however, it becomes possible to create super-intelligent robots regardless of size. Technology for popularizing service robots NAVER LABS researches ambient intelligence technology that blends into the physical spaces where people dwell while naturally providing information and services. Service robots are a core platform for achieving this, which is why 5G and cloud technology are so important to their popularization. CES 2019 provided the stage for the competent and proud engineers of NAVER LABS and Qualcomm to successfully demonstrate the world’s first 5G brainless robot. And at MWC19, NAVER LABS agreed to begin collaborating on 5G-based service robot development with KT, Intel, and NAVER Business Platform. The goal is to develop service robots using the various 5G solutions offered by Intel, provide robot services under the ultra-reliable, low-latency conditions of KT’s 5G telecommunications network and Edge Cloud infrastructure, and empower the NAVER Cloud Platform to function as the robot brain. We anticipate more progress ahead as the specialized engineers of each company devote their collective energy to realizing the future of robot technology. It is undoubtedly thrilling to stand at the moment where the imaginations of the past can be realized by the technology of the present. We aim to collaborate passionately through the best of partnerships to produce something nothing short of extraordinary. > Subscribe to our newsletter
How close is the future we once imagined? You can find out by visiting one event: the Consumer Electronics Show (CES), now the largest technology exhibition in the world. CES 2019 was a special event for NAVER and NAVER LABS because we held our first official booth there. We unveiled new technologies that integrate our research results in areas including robots, autonomous driving, and AI, with 13 new products. Let us introduce the highlights of the exhibition. 5G Brainless Robot: a technology for taking the “brain” of the robot out of its “body” The main topic of CES 2019 was 5G. NAVER LABS gave demonstrations of innovative 5G technology straight out of science fiction movies: the 5G Brainless Robot. This technology pulls the high-performance computers that function as a robot’s “brain” out of the robot’s “body”; an external cloud connected over a 5G network then serves as the robot’s brain. The reason this technology received special attention at CES is that we, in collaboration with Qualcomm, achieved high-performance robot control, for the first time in the world, fully utilizing 5G’s ultra-low latency. The potential future uses of this technology are innumerable. The NAVER data center could function as the brain of service robots working all over the world. Since multiple robots can be controlled simultaneously, there is no need to install a high-performance processor in each robot. It also becomes easier to integrate and analyze the data collected by multiple robots and to update them all at once as new algorithms are refined. In many ways, this is a key technology for cloud-based robot services. > Learn more about 5G Brainless Robots Hybrid HD Map, a unique HD map solution for autonomous vehicles A new technology geared towards autonomous vehicles was also unveiled.
NAVER LABS has been researching autonomous vehicles since 2016 and introduced its Hybrid HD Map technology at CES. HD maps are a critical piece of data for autonomous vehicles: making good use of an HD map allows a vehicle to know its current location more accurately and to plan routes safely and effectively. The Hybrid HD Map technology that NAVER LABS demonstrated is quite novel. Unlike HD maps made by other methods, it combines aerial photographs taken from airplanes with MMS vehicle data in a two-part process. First, the layout information of the road surface is extracted from the aerial photographs. Then, that data is organically combined with data collected by R1, a self-developed MMS (mobile mapping system). This is an effective way to produce a vast, city-scale HD map more accurately and quickly than ever before. > Learn more about Hybrid HD Maps AROUND G: a robot that drives autonomously without using a laser scanner AROUND G is a robot that guides people through AR in large, complex indoor spaces such as shopping malls, airports, and hotels. Indoor autonomous robots are themselves no longer a new technology; there were many others at this year’s CES. However, AROUND G has a distinct point of difference: it does not use an expensive LiDAR (laser scanner). A laser scanner is a device that perceives a robot’s surroundings by measuring how quickly emitted light strikes objects and is reflected back, and many autonomous machines use this type of sensor. The problem is that it is expensive. What NAVER LABS has been researching is how to achieve fluid autonomous driving using only very cheap camera sensors rather than expensive equipment, because we believe such technology is needed to popularize autonomous robots. For AROUND G, many of the features required for autonomous driving are handled in a map cloud, and the robot itself is equipped with only low-cost sensors.
Even with low-cost sensors, AROUND G moves very fluidly between obstacles and pedestrians because it uses a deep reinforcement learning algorithm. This surprised many companies developing autonomous robots at this year’s CES. > Learn more about AROUND G AHEAD: a 3D AR HUD AHEAD, a 3D AR HUD (head-up display) for vehicles, also drew attention from many automobile manufacturers and electronic parts companies. AHEAD utilizes 3D optical technology that adjusts displayed information so it appears to lie on the actual road, right in the driver’s natural line of sight. There are many advantages to having the actual road and the displayed information appear at the same distance. Since the images displayed by AHEAD look as if they are actually on the road, the gap between the road the driver must pay attention to and the place they have to look for information on a traditional dashboard is reduced, improving safety. This helps solve a concern with existing HUDs, where the difference in focal distance between the displayed virtual image and the actual road can distract the driver. AHEAD provides information naturally, without disturbance, while allowing the driver to keep their eyes forward, and can be a new display solution connecting vehicles and information. > Learn more about AHEAD We also released many other technologies. The future that NAVER LABS envisions is integrated into a technological vision called ambient intelligence: technology that understands user environments and provides the necessary information and services before users even request them. This is the future of NAVER. To this end, we have been researching technology for collecting high-precision data, such as indoor paths and roads, and using it to provide information and services through various robots and computing devices.
"You mean to say that all this was developed by NAVER?" This was a question asked by a visitor who was happy to find the familiar NAVER logo at CES and stopped by our booth. Perhaps, as familiar as he was with NAVER, he was also excited and surprised to discover the new technologies we exhibited. We still find many people who are unaware that NAVER is developing robots and researching autonomous technologies. However, these are the technologies we need to prepare for the future. These key technologies will be woven into future NAVER services and will provide users with information and services in new ways. That is why they fit the theme of this exhibition, "the possibilities of new connections and discoveries through technology." In addition to the technologies introduced above, you can find more information about the exhibits displayed at CES <here>.
Last year, we unveiled the xDM platform for the first time at DEVIEW. The xDM platform is an integrated location and mobility technology that combines technologies being researched at NAVER LABS, including robot- and AI-based HD mapping, location and navigation technologies, and high-precision data. The aim of the xDM platform is to enable various mobility and space-based services. As part of that effort, we introduced various location-based and self-driving services built on the xDM platform at CES 2019, including NAVER LABS’ AR navigation, self-driving vehicle, service robot, and ADAS. Furthermore, today we begin our collaboration with LG Electronics, applying our xDM platform to LG Electronics’ CLOi robot. Applying the xDM platform to robotics makes it possible to deliver indoor self-driving technology with precise control using only low-cost sensors and low processing power. This is achieved by dividing the required functions and roles: the map creation task is allocated to a mapping robot, and the location identification and route creation tasks to the xDM cloud. Through the partnership with LG Electronics, we intend to amplify the efficiency and precision of the CLOi robot by applying the strengths of the xDM platform, while perfecting its accuracy as an integrated location and mobility platform using the newly collected data. NAVER LABS will continue joint research and development with LG Electronics on applying the xDM platform to other devices. We plan to conduct demonstration projects for performance improvement and optimization, and to find new ways to utilize the data collected through the collaboration between the CLOi robot and the xDM platform. Integrating the proprietary technologies of the two companies, we expect new technological innovation to arise from a great synergy effect.
The ambient intelligence research of NAVER LABS aims to provide useful services that naturally integrate into daily living spaces. We intend to develop new services and tools that understand the contexts of everyday life in all the spaces where people reside. Hand in hand with a great partner, we will continue our efforts to realize this vision.
NAVER has proudly unveiled its booth at CES 2019. The booth is located in the Central Plaza of Tech East. See booth location and overview ■ AMBIDEX Demonstration - The world’s first 5G brainless robot AMBIDEX, which uses innovative cable-driven mechanisms, is a robot arm capable of interacting safely with humans. Working together with Qualcomm, NAVER LABS successfully demonstrated the 5G capabilities of AMBIDEX at CES. The advanced technology enables precise control over the robot using the low latency of 5G networks, without requiring high-performance processors on board. ■ AROUND G Demonstration - The culmination of xDM platform technologies AROUND G is an autonomous guide robot that provides guidance in large indoor spaces such as shopping malls, airports and hotels. It is the culmination of technologies being researched under the xDM platform, including HD mapping, visual localization, robotics, AI, and AR navigation. A distinct feature of the robot is that it functions smoothly as an autonomous guide using a deep reinforcement learning algorithm, without having to rely on expensive laser scanners. ■ NAVER LABS’ diverse location & mobility intelligence technologies NAVER’s booth is largely comprised of an indoor section and an outdoor section. This concept mirrors the characteristic of location and mobility intelligence technology, which functions seamlessly across indoor and outdoor environments. The exhibition features NAVER LABS’ key research outcomes, ranging from the on-the-road R1 to indoor autonomous robots. See details on exhibits
At CES 2019, NAVER LABS presents its latest location and autonomous mobility intelligence technologies, developed with the goal of achieving ambient intelligence. See booth location and overview ■ xDM Platform eXtended Definition & Dimension Map The xDM platform is an integrated location and mobility solution for smartphones and autonomous machines. It combines our portfolio of robotics, autonomous driving and AI-based technologies such as HD mapping, localization and navigation with high-precision spatial data. Key features are self-updating 3D/HD mapping, precise indoor and outdoor positioning and context-aware location information based on real-time spatial data. The platform solution supports the development of future location-based services such as augmented reality (AR) walking navigation and autonomous mobility services that include autonomous vehicles, service robots and advanced driver-assistance systems (ADAS). ■ Mapping Solutions M1, Indoor Autonomous Mapping Robot M1 is an indoor 3D/HD mapping robot that navigates autonomously in indoor spaces. M1 automatically collects high-resolution images and 3D spatial data via high-performance cameras and LiDAR, significantly improving the efficiency of what was previously a manual mapping process. The resulting HD maps provide spatial data that is essential to location-based services, such as AR walking navigation and indoor autonomous service robots. Self-Updating Map NAVER LABS uses cutting edge AI technologies for advanced research on self-updating maps. The technology uses data collected by indoor autonomous robots and advanced AI solutions developed by experts in robotics, computer vision, deep learning and machine learning. Point of interest (POI) change detection technology detects and updates information on individual stores in large shopping malls. Further research advances on POI attribute recognition and semantic mapping technology will be phased in over the next few years. 
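The POI change detection described above can be illustrated with a minimal sketch: compare the shops recorded in the existing map against the shops recognized in newly collected imagery, and classify each location as opened, closed, or renamed. The dictionary format and the classification rule here are illustrative assumptions, not NAVER LABS’ actual algorithm:

```python
# Toy POI change detector: both inputs map a location ID to a recognized
# shop name; the output lists only the locations whose state changed.
def detect_poi_changes(map_shops, observed_shops):
    changes = {}
    for loc in map_shops.keys() | observed_shops.keys():
        before, after = map_shops.get(loc), observed_shops.get(loc)
        if before == after:
            continue  # no change at this location
        if before is None:
            changes[loc] = ("opened", after)
        elif after is None:
            changes[loc] = ("closed", before)
        else:
            changes[loc] = ("renamed", before, after)
    return changes

existing_map = {"B1-01": "Cafe A", "B1-02": "Bookstore"}
new_observation = {"B1-01": "Cafe B", "B1-03": "Bakery"}
print(detect_poi_changes(existing_map, new_observation))
```

In a real system the hard part is producing the observed shop list in the first place: distinguishing shop signage from advertisements and passers-by in the robot’s images, which is what the computer vision and deep learning work addresses.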
■ Autonomous Robots AROUND Platform, Autonomous Service Robot Platform The ambition of the AROUND platform is to commercialize autonomous robot services. The key functions of the autonomous robots are distributed between mapping robots and the xDM cloud. This separation significantly lowers manufacturing costs. The mapping robot retrieves spatial data by navigating the indoor environment. The map data is then uploaded to the xDM cloud, from where autonomous services are delivered through cloud-based visual localization and path planning. The collision avoidance algorithm that runs on the edge ensures that the AROUND platform effectively responds to unexpected circumstances and avoids obstacles until the destination has been safely reached. Depending on spatial characteristics and user needs, it can be customized to serve different purposes, from delivering books in a library or store to giving directions in a shopping mall. AROUND G, Autonomous Guide Robot AROUND G is an autonomous guide robot built on the AROUND platform. It provides guidance in large indoor spaces such as shopping malls, airports and hotels, and delivers intuitive information through AR navigation. High-precision indoor maps and visual and sensor localization are all serviced over the xDM platform to provide accurate location sensing and to guide users to their destination via the best route. The AR navigation installed in the main unit delivers information on the surrounding space while giving directions. Immersed in its environment, AROUND G creates ambient intelligence whereby users are more engaged by the useful services the robot provides than by the robot itself. ■ Autonomous Driving Hybrid HD Map & R1 Based on our autonomous driving and 3D/HD mapping technology, we’re developing mapping solutions using aerial images and mobile mapping data. The 3D mapping technology combines the aerial images and extracts information from the road surfaces.
The lightweight mobile mapping system R1 then generates HD maps from point clouds while autonomously on the move. Compared to HD maps obtained with expensive mobile mapping systems, this hybrid HD map solution maintains high accuracy at lower cost. NAVER LABS ADAS CAM The ADAS CAM offers a suite of ADAS functions based on deep learning algorithms. The system relies on only a single camera for forward-collision warning (FCW) and lane-departure warning (LDW). In addition, the integration of the hybrid HD map on the xDM platform enables functions of higher precision even in complex environments. The ADAS camera modules, developed in-house, accurately gauge road conditions in a variety of circumstances with high dynamic range (HDR) and flicker-free functions. ■ NAVER Maps & Wayfinding NAVER Maps offers common, everyday services such as location search, public transit information and driving navigation. Users are seamlessly provided with up-to-date information on indoor and outdoor spaces and, over the xDM platform, other innovative services are being developed to meet future needs. Indoor AR Navigation NAVER LABS provides indoor AR navigational information based on user location and positioning, even where there is no GPS coverage. It utilizes indoor maps created by the mapping robot M1 on the xDM platform, together with visual and sensor localization technology. Turn-by-turn directions are given with reference to POIs within the user’s visual range instead of the remaining distance to cover. AKI, Location & Geofencing Technology AKI is a smart watch for young children that utilizes location detection, geofencing technology and personalized positioning over the xDM platform. Based on location pattern analysis, AKI provides timely notifications of a child’s location and movements to their parents and guardians.
AWAY, In-Vehicle Infotainment Platform AWAY is an infotainment platform for vehicles with a user interface that enhances driver safety and specifically optimizes music, news and other media services for the driving environment. The AWAY head unit gives drivers simultaneous access to various functions, from media content to navigation, on a wide 24:9 ratio screen that supports split view. The platform has been deployed in vehicles operated by the Korean car sharing company Green Car. AHEAD, 3D AR HUD AHEAD is a 3D AR head-up display (HUD) for vehicles. Most HUDs can be distracting for drivers due to the different focal distances between the virtual images and their actual view. Through 3D optical technology, the virtual images projected by AHEAD appear to exist on the road, allowing drivers to effortlessly perceive information. Download AHEAD brochure (PDF) ■ Robotics AMBIDEX, Robot Arm with Innovative Cable-Driven Mechanisms AMBIDEX is a robot arm resulting from collaborative R&D on human-robot coexistence. The arm uses innovative cable-driven mechanisms that make any interaction with humans safe. At just 2.6 kg (5.7 lbs), it weighs less than the average arm of an adult male. AMBIDEX can operate at a maximum speed of 5 m/s and is capable of carrying up to 3 kg (6.6 lbs). Because AMBIDEX can be controlled to the same extent as an industrial robot, it has a wide range of applications, from simple carrying to performing complex tasks that require precise manipulation and collaboration. AMBIDEX supports high-speed, wireless, real-time control from remote locations using the low latency and high throughput of 5G networks. AIRCART, Human-Power Amplification Technology The AIRCART trolley is built on robotics technology that augments human strength. Its physical human-robot interaction (pHRI) makes it easy for anyone to shift heavy loads.
How the user intends to move AIRCART is captured by a force sensor on the handle, so controlling it is intuitive and simple from the start. Its automatic braking system prevents accidents when going up or down a slope. AIRCART is in use at bookstores and factories.
NAVER is a company creating new ways for people to discover and connect. The information and services we offer are based on contextual understanding, personalization and natural interfaces. To seamlessly integrate these services into diverse life experiences, NAVER LABS is developing innovative technology in robotics, autonomous mobility and location intelligence. Learn more about us in the NAVER exhibition area at CES 2019. ■ About Company NAVER NAVER Co., Ltd. operates South Korea’s largest web search engine and is a global ICT brand providing services that include the LINE messenger, currently with over 200 million users around the world, the SNOW video app, and the digital comics service NAVER WEBTOON, while NAVER BAND, a group SNS service, has achieved a million MAU. Sustained research and development in AI, robotics, mobility, and other future technology trends propels NAVER forward in pursuit of the transformation and innovation of technology platforms, while the company also devotes itself to shared growth together with users from the global community and a vast number of partners. In 2018, NAVER was ranked the 9th most innovative company by Forbes and 6th on Fortune’s Future 50 list. NAVER LABS Founded in 2013 as NAVER’s research center, NAVER LABS spun off as a separate entity in 2017 to focus its research on ambient intelligence in areas such as autonomous driving, robotics, artificial intelligence and geospatial data. NAVER LABS’ mission is to achieve ambient intelligence that enriches user environments with technology that proactively understands users and provides them with information and services. In line with this mission, distinguished researchers from Korea and Europe are committed to understanding the places where people carry out their lives, connecting these locations, and shaping the future of mobility. ■ About CES CES® is the world’s gathering place for all who thrive on the business of consumer technologies.
It has served as the proving ground for innovators and breakthrough technologies for 50 years – the global stage where next-generation innovations are introduced to the marketplace. As the largest hands-on event of its kind, CES features all aspects of the industry. CES 2019 will run January 8-11, 2019 in Las Vegas, NV. ■ Booth Location Tech East, LVCC, Central Plaza – CP 14 ■ CES 2019 Innovation Awards Honorees R1, Mobile Mapping System (Vehicle intelligence and self-driving technology) AWAY, In-vehicle Infotainment Platform (In-vehicle audio/video) AHEAD, 3D AR HUD (In-vehicle audio/video) AMBIDEX, Robot Arm with Innovative Cable-Driven Mechanisms (Robotics and drones) ■ Exhibitions Learn more: Introduction of NAVER LABS’ CES 2019 exhibits xDM platform, eXtended Definition & Dimension Map The xDM platform is an integrated location and mobility solution for smartphones and autonomous machines. It combines the NAVER LABS portfolio of robot- and AI-based technologies such as high definition (HD) mapping, localization and navigation with high-precision spatial data. Key features are self-updating 3D/HD mapping, precise indoor and outdoor positioning and context-aware location information based on real-time spatial data. The platform solution supports the development of future location-based services such as augmented reality (AR) walking navigation and autonomous mobility services that include autonomous vehicles, service robots and advanced driver-assistance systems (ADAS).
Mapping Solutions M1, Indoor Autonomous Mapping Robot Self-Updating Map Autonomous Robots AROUND Platform, Autonomous Service Robot Platform AROUND G, Autonomous Guide Robot Autonomous Driving Hybrid HD Map & R1 ADAS CAM NAVER Maps & Wayfinding Indoor AR navigation AWAY, In-Vehicle Infotainment Platform AKI, Smart Watch for Kids AHEAD, 3D AR HUD Robotics AMBIDEX, Robot Arm with Innovative Cable-Driven Mechanisms AIRCART, Human-Power Amplification Technology ■ Demonstration Schedule (1/8-1/10) AROUND G 11:00 / 13:00 / 15:00 / 17:00 AMBIDEX 11:30 / 13:30 / 15:30 / 17:30 ■ Contact Partnership Proposal firstname.lastname@example.org Media Contacts Ryan Hyeonwoo Lee email@example.com (LINE) hlee293 Dong-keun Han firstname.lastname@example.org (LINE) drake3323
NAVER LABS is beginning a technological collaboration with Qualcomm, a global pioneer in advanced digital wireless communication technologies, products, and services. Starting with a memorandum of understanding with Qualcomm’s subsidiary, Qualcomm Technologies, Inc., we are going to proactively integrate various technologies from each company in a range of fields including robotics, self-driving technology, and AR, among others. Through this technological cooperation, NAVER LABS will be able to take its technologies, including self-driving, IVI, robotics, precision location, and AR navigation, to the next level by utilizing the know-how and solutions that Qualcomm has accumulated as a leader in the global chip market. Not only that, we also expect our research on ambient intelligence to expand as a result of this partnership. Synergy may take the form of advancement, but it can also lead to new possibilities that did not exist before. Through organic cooperation between the two companies, we will begin new stories of technological innovation in places found in our daily lives. We will continue to share the process and outcomes of this promising work.
NAVER LABS is starting a new collaboration with SOCAR. On the 14th, NAVER LABS is signing a partnership with SOCAR to work on Advanced Driver Assistance Systems (ADAS) and HD maps based on self-driving vehicle technology. We plan to apply the self-driving know-how we have accumulated so far in the form of ADAS to contribute to the safe operation of SOCAR. In addition, we intend to link the xDM platform, which we unveiled at DEVIEW 2018, with SOCAR’s vehicles in order to render dynamic maps that show traffic conditions in real time. This will help SOCAR customers reach their destinations in a safer and faster manner. As is well known, SOCAR is the biggest car sharing company in Korea, directly operating around 11,000 vehicles. The large-scale data collected by the vehicles operated by SOCAR will be integrated with map information and the technology owned by NAVER LABS to accelerate the formation of a digital twin ecosystem, where real-time information on the road environment is uploaded directly to the xDM platform. A good collaboration always brings new possibilities. NAVER LABS will continue to build innovative partnerships to develop technologies that have real-life applications and that directly address problems experienced on a daily basis.
The outcome of NAVER LABS’ research on ambient intelligence has led to its winning four CES 2019 Innovation Awards. Every year, a judging committee comprised of industry experts, including engineers and designers, selects products with excellent technology and competitive designs to receive the CES Innovation Awards. This year, NAVER LABS entered three product categories, and four of its products were honored with the prestigious award. AHEAD and AWAY received awards in the in-vehicle audio/video category, NAVER LABS R1 received an award in the vehicle intelligence and self-driving technology category, and AMBIDEX was recognized in the robotics and drones category. AHEAD, 3D AR HUD AHEAD is a three-dimensional augmented reality head-up display (3D AR HUD) unveiled for the first time at DEVIEW 2018. Unlike conventional HUD technology, which creates an image at a single focal length, AHEAD provides driving information in a way that integrates more naturally with the real road environment. It allows drivers to feel as if the visual information really exists on the road, and to immerse themselves more easily in the various driving information provided, such as navigation instructions, forward collision warnings, lane departure warnings, safe distance warnings, and so on. AWAY, in-vehicle infotainment platform AWAY is an infotainment platform for vehicles invented by NAVER LABS. It offers a range of media services optimized for the driving environment, including a UI designed for driver safety, various location-based information systems, an exclusive navigation program with a voice agent that can search destinations, Naver Music and Audio Clip, and so on. One of the defining features of the AWAY head-unit display showcased at CES this year is the 24:9 split-view system, which allows the user to simultaneously enjoy multiple functions, such as media content and a navigation system, without visual interference.
NAVER LABS R1, mobile mapping system NAVER LABS R1 is a mobile mapping system designed to create hybrid high definition (HD) maps for self-driving vehicles. The hybrid HD maps based on Naver’s proprietary mapping solution are created by organically integrating information retrieved from preexisting precision aerial photographs with the point cloud information collected by an R1 vehicle. Both the 2D and 3D data are processed with a unique algorithm that automatically extracts the features required to draw the HD maps. This reduces production costs compared to conventional MMS devices while ensuring the same level of accuracy and recency. AMBIDEX, robot arm with innovative cable-driven mechanisms AMBIDEX is a robot arm that can safely interact with people through an innovative cable-driven power transfer mechanism. A single AMBIDEX arm weighs just 2.6 kg, lighter than the arm of a fully-grown human male. Despite its light weight, it can handle a 3 kg payload and operate at a maximum speed of 5 m/s. Force can be amplified simultaneously across all seven joints, and the arm can be controlled precisely. Able to develop its operative skills through deep learning, it can provide people with a range of services that directly help them. Starting on 8 January next year, NAVER LABS will participate in CES 2019, to be held in Las Vegas, USA, where the products that won CES 2019 Innovation Awards will be introduced along with various other achievements in the field of ambient intelligence, including artificial intelligence (AI), self-driving vehicles, robotics, and so on. NAVER LABS hopes to take this opportunity to create new possibilities in the location and mobility sector with partners on the global stage.
AROUND G is an indoor self-driving guide robot. It drives autonomously in large-scale indoor spaces such as shopping malls, airports, and hotels. When giving directions, it uses AR navigation technology on its main display to deliver location and route information in a vivid and immersive way. AROUND G can self-drive smoothly without an expensive laser scanner. The key to this is the xDM Cloud of the AROUND Platform, together with the deep reinforcement learning algorithm running on the robot itself. The AROUND Platform is a solution that divides the fundamental functions required for a self-driving robot into two parts: a mapping robot and the xDM Cloud. First, the mapping robot, M1, drives autonomously around indoor spaces to collect spatial data and then uploads the collected map data to the xDM Cloud. After this, the service robot uses the data processed in the cloud, such as map data, visual localization, and path planning, to drive autonomously. An obstacle avoidance algorithm based on deep reinforcement learning runs on the robot's main body, so it responds smoothly to spontaneous events that may occur while giving directions. That is to say, this robot can move smoothly to a destination while naturally avoiding pedestrians and other obstacles that do not exist in the map. Our goal is to bring self-driving service robots into the mainstream. If we continue to reduce the production costs of self-driving technology by eliminating expensive laser scanners, we will be able to more quickly bring about a time when a range of useful self-driving service robots appear in our daily lives.
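The division of labor described above, where the cloud holds the map and plans routes while the robot only follows waypoints and handles obstacles locally, can be sketched as follows. This is a minimal illustration, not the actual AROUND implementation: the class names, the grid map, and the breadth-first planner are all stand-ins for the real cloud services.

```python
from collections import deque

class XDMCloud:
    """Holds the indoor map (built offline by a mapping robot) and plans routes."""
    def __init__(self, grid):
        self.grid = grid  # 0 = free cell, 1 = wall

    def plan_path(self, start, goal):
        # Breadth-first search over the grid: shortest path in steps.
        rows, cols = len(self.grid), len(self.grid[0])
        prev, frontier = {start: None}, deque([start])
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and self.grid[nr][nc] == 0 and (nr, nc) not in prev):
                    prev[(nr, nc)] = cell
                    frontier.append((nr, nc))
        return []

class ServiceRobot:
    """Lightweight client: queries the cloud for a route, then executes it,
    pausing whenever its local sensors report a dynamic obstacle."""
    def __init__(self, cloud):
        self.cloud = cloud

    def drive(self, start, goal, obstacle_at=lambda cell: False):
        route, log = self.cloud.plan_path(start, goal), []
        for cell in route:
            if obstacle_at(cell):            # e.g. a pedestrian on the route
                log.append(("wait", cell))   # pause briefly, then proceed
            log.append(("move", cell))
        return log

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
robot = ServiceRobot(XDMCloud(grid))
trace = robot.drive((0, 0), (2, 0))
```

The point of the split is that the robot never needs the map or the planner onboard, which is what allows it to run on low-cost sensors and modest compute.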
Self-driving vehicles have many sensors. They drive autonomously by processing the vast amount of data collected through those sensors. There is, however, a part of a self-driving vehicle that acts as both data and a sensor at the same time: the HD map. There is a reason we can describe an HD map as another sensor on a self-driving vehicle. Self-driving vehicles use an HD map, along with other sensor data, to improve their localization accuracy and to plan routes more effectively and safely. In this sense, an HD map is an essential element for the performance and safety of self-driving vehicles. This is why we are focusing on developing a new solution for precise, machine-readable HD maps that can be used in self-driving vehicles. The Hybrid HD Mapping technology we have unveiled is a truly unique solution. It is based on the organic integration of large-scale aerial photographs of each city with data from a mobile mapping system. First, we extract information about the layout of the road surface from aerial images. Then, we organically integrate a point cloud collected by R1, our proprietary lightweight mobile mapping system (MMS), as it moves around that space. Compared to conventional HD maps constructed by MMS vehicles alone, our mapping process significantly reduces production costs and lead time, all while maintaining the same degree of accuracy. NAVER LABS is independently researching and developing self-driving vehicles and has obtained a temporary permit for them from the Ministry of Land, Infrastructure, and Transport. This allows us to develop Hybrid HD Mapping directly, testing and comparing our research results on the road. We are also actively conducting research on localization technology that utilizes HD maps. This technology allows self-driving vehicles to identify their current location accurately and safely, even in the densest parts of cities, where GPS signals are easily lost.
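To make the two ideas above concrete, here is a toy sketch, with entirely hypothetical names and data, of (a) a hybrid map as a structure holding a 2D road layout from aerial imagery plus 3D landmarks from an MMS point cloud, and (b) HD-map-based localization as snapping a rough GPS guess to the map by matching observed landmarks. A real system would use point-cloud registration such as ICP; a nearest-landmark average stands in for it here.

```python
def build_hybrid_map(aerial_lanes, mms_landmarks):
    """Merge the two sources into one machine-readable map structure."""
    return {
        "layout": aerial_lanes,      # 2D polylines: lane centerlines, stop lines
        "landmarks": mms_landmarks,  # 3D/2D points: signs, lights, building corners
    }

def localize(hd_map, observed, gps_guess):
    """Correct a rough GPS guess by averaging the offsets between observed
    landmarks and their nearest counterparts in the map (a crude stand-in
    for a real registration step such as ICP)."""
    dx = dy = 0.0
    for ox, oy in observed:
        nearest = min(hd_map["landmarks"],
                      key=lambda p: (p[0] - ox) ** 2 + (p[1] - oy) ** 2)
        dx += nearest[0] - ox
        dy += nearest[1] - oy
    n = len(observed)
    return (gps_guess[0] + dx / n, gps_guess[1] + dy / n)

hd_map = build_hybrid_map(
    aerial_lanes={"lane_1": [(0, 0), (100, 0)]},
    mms_landmarks=[(10.0, 5.0), (50.0, 5.0)],
)
# The camera/LIDAR saw both landmarks shifted by (-1, 0), i.e. GPS is 1 m off.
pose = localize(hd_map, observed=[(9.0, 5.0), (49.0, 5.0)], gps_guess=(30.0, 0.0))
```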
As more diverse self-driving machines and services are introduced, the importance of HD maps will only increase. More advanced and diversified HD-map-based algorithms can also be expected to appear. Through our Hybrid HD Mapping technology, we hope to introduce a new HD map solution that satisfies the needs of both maintaining data accuracy and keeping production costs reasonable. See details of NAVER LABS' autonomous driving technologies
AHEAD is a three-dimensional augmented reality head-up display (3D AR HUD). That is to say, it is a 3D display technology that provides information directly in a driver’s natural line of sight. With conventional HUD technology, the focal point of the informative image created by the display is not synchronized with the actual road environment, which can negatively affect the driver’s focus. When the driver focuses on the information displayed on a conventional HUD, their view of the road is obscured, and vice versa. To address this issue, AHEAD utilizes 3D optical technology that makes information appear integrated into the actual environment of the road. It also covers both short- and long-distance information. Many benefits arise when the view of the actual road appears synchronized with the display’s information. An image displayed on AHEAD looks like it actually exists on the road, which allows it to deliver information in the most natural manner. Because drivers do not have to adjust their focal point, they can maintain their attention, which effectively improves safety and causes less eye fatigue. Furthermore, once it is integrated with precise road and map data, AHEAD will be able to display even more accurate information. The space inside vehicles and the driving environment are very distinctive. In the future, more and more information and services will be integrated to assist with driving and improve safety. Within that trend, AHEAD, which delivers information precisely and safely without obscuring the view of the road, will be a new display solution that connects vehicles and information in the most useful and natural way possible. Download the leaflet
It is easy to get lost inside large-scale indoor spaces like shopping malls. However, GPS does not work inside buildings, so smartphones are of little help in these cases. Even with a map in hand, there is still the problem of knowing your current location. For indoor navigation, we need to construct a precise map of the indoor space and also develop a technology that accurately shows the current location without using GPS. In the field test demonstration of indoor AR navigation conducted at the COEX Mall in Seoul, NAVER LABS utilized visual localization technology along with data from various sensors to solve the problem of finding the current location. It is a technology that analyzes images from a smartphone camera to identify the current location. The precise indoor map and location data constructed by the mapping robot M1 served as the key data for localization and navigation. In addition, for an even more intuitive user experience, we applied a technology that delivers turn-by-turn (TBT) direction information through AR. Our precision mapping technology and our visual and sensor-fusion localization technology, which utilize robots, have been developed to provide directions and information services in indoor spaces while accurately identifying the current location, without having to install separate hardware infrastructure.
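The core retrieval step of visual localization, as described above, can be sketched as comparing a query image's descriptor against a database of descriptors whose poses are already known from the prebuilt map. This is only an illustration under strong simplifications: real systems use learned image descriptors and refine the pose with local feature matching, whereas here toy vectors and cosine similarity stand in for both, and all names and values are invented.

```python
def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def visual_localize(query_descriptor, database):
    """database: list of (descriptor, pose) pairs built by the mapping robot.
    Return the pose attached to the most similar stored view."""
    best = max(database, key=lambda entry: cosine(query_descriptor, entry[0]))
    return best[1]

database = [
    ([1.0, 0.0, 0.2], {"x": 12.0, "y": 3.0, "floor": 1}),   # view near entrance A
    ([0.1, 1.0, 0.9], {"x": 85.0, "y": 40.0, "floor": 2}),  # view near food court
]
pose = visual_localize([0.2, 0.9, 1.0], database)
```

The practical consequence is the one the article emphasizes: no beacons or other dedicated hardware are needed indoors, because the camera image itself is the "signal" being matched.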
In our daily lives, there are still many unsolved problems related to space and movement between spaces. These are the problems that NAVER LABS is concerned with. In the keynote speech given at DEVIEW 2018, we shared our deliberations on these issues and the results of our research. "AI: Not Artificial Intelligence, Ambient Intelligence" This was the theme of the keynote speech. Ambient intelligence refers to “a technology that provides relevant information or actions in a timely and natural manner by recognizing and understanding the environment and its context,” and this is our technological vision. With this in mind, we unveiled the xDM Platform, an integrated location and mobility solution for people and self-driving machines. “xDM” stands for “extended definition and dimension map.” It combines mapping, localization, and navigation technologies with all the precision data we have gathered so far. It constructs precise 3D maps of indoor and outdoor environments for use on smartphones and in self-driving machines, and it has technology to automatically update those maps. It offers precise positioning that covers indoor, outdoor, and road environments without leaving any blind spots. It also stores real-time, real-space data, generating movement information and understanding contexts. The xDM Platform, which combines the aforementioned technologies, is comprised of two packages. One is the Wayfinding Platform, designed to help people find their current location and get directions through indoor and outdoor environments. The other is the Autonomous Mobility Platform, designed for vehicles and self-driving machines. The Wayfinding Platform for People The Wayfinding Platform is a solution that allows people to move faster and along more convenient paths.
Through a location API, this platform provides detailed location/movement information, such as smart geo-fencing, mobility pattern analysis, and personalized localization, to the user. In addition, POI information is continuously updated through the road/AR navigation API, which guides users along the quickest routes in a fast and easy way, even inside large-scale indoor spaces where GPS does not work. Using visual and sensor-fusion localization technology, the platform can accurately recognize the user’s current location on the 3D indoor map created by the mapping robot M1, without the need for separate geolocation infrastructure. It also provides turn-by-turn (TBT) information based on geographic features, and delivers navigation information more intuitively through the AR navigation API. In the keynote address, a demonstration of the AR navigation technology was performed at COEX, Seoul. Our plan to collaborate with premier partners, HERE and Incheon Airport Corp., was also disclosed. We are waiting for more partners to collaborate with us. We also introduced scalable and semantic indoor mapping (SSIM), which automatically keeps indoor maps up to date. It is a technology that automates the indoor map creation, data collection, and maintenance processes using NAVER LABS’ technologies in robotics, computer vision, visual localization, machine learning, and so on. Currently, we are focusing on the POI change detection stage, in which a self-driving service robot operating in indoor spaces automatically detects changes in POI, and these changes are updated on the map. In the future, this will be extended to POI recognition and semantic mapping. The same technology will be applied to self-driving in outdoor and road environments. An Autonomous Mobility Platform for Self-Driving Vehicles and Robots These days, mobility solutions do not apply only to people.
Soon, self-driving technology for self-driving robots, not to mention self-driving vehicles, will penetrate deeply into our daily lives. The Autonomous Mobility Platform is a solution for self-driving machines. In the keynote address, we unveiled new HD mapping technology for self-driving vehicles. An HD map is essential data that self-driving vehicles require to identify their exact location and to search for the optimal route to a destination. NAVER LABS utilizes its Hybrid HD Map solution to create HD maps for each city by organically integrating route networks extracted from precision aerial photographs with data collected by R1, NAVER LABS’ mobile mapping system. We are implementing algorithms for both 2D and 3D data that automatically extract the features required for mapping. In addition, based on this HD map, we are developing a solution that can measure location accurately, even in GPS-shadow areas such as city centers where high-rise buildings block GPS signals, by combining the map with information collected through a self-driving vehicle’s GPS sensor, IMU sensor, CAN data, LIDAR signals, and camera images. Furthermore, we are collaborating with Qualcomm and Mando on research into ADAS technologies connected with Hybrid HD Maps, and into various other self-driving technologies. The AROUND platform is a solution for bringing self-driving service robots into the mainstream. It utilizes precise 3D maps created with M1 and cloud-based route search algorithms to reduce the cost of robot production while maintaining high-quality self-driving performance. Unlike conventional self-driving robots, which have to perform core functions such as map creation, location identification, route creation, and obstacle avoidance by themselves, this platform can achieve highly precise indoor self-driving with only low-cost sensors and a small amount of processing power.
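One simple way to picture the multi-sensor localization described above is inverse-variance weighting: each source (GPS, map-matched LIDAR/camera, IMU/CAN dead reckoning) contributes an estimate, and noisier sources contribute less. This is a hedged, one-dimensional sketch, not NAVER LABS' actual fusion algorithm (which would more plausibly be a Kalman-style filter); the sensor labels and noise figures are invented for illustration.

```python
def fuse(estimates):
    """estimates: list of (value, variance) pairs. Returns the fused value,
    weighting each estimate by the inverse of its variance."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    return sum(v * w for (v, _), w in zip(estimates, weights)) / total

# In an urban canyon the GPS variance is huge, so the fused position stays
# close to the LIDAR/camera estimate that was matched against the HD map.
fused_x = fuse([
    (105.0, 100.0),  # GPS: degraded by multipath between high-rises
    (100.2, 0.5),    # LIDAR scan matched to the HD map: precise
    (100.9, 2.0),    # IMU + CAN dead reckoning: drifts slowly
])
```

This is exactly why the article calls the HD map "another sensor": in GPS-shadow areas, the map-matched estimate is the one that keeps the fused position accurate.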
Continuing from AROUND, which was used in YES24 bookstores last year, we are now developing AROUND G, a self-driving guide robot that provides direction services in large-scale indoor spaces such as shopping malls and airports. AROUND G will be outfitted with the AR navigation API to offer directions and guidance with an even more intuitive UX. Ambient Intelligence Technologies for the Present, Not the Future In this keynote, we also presented NAVER LABS’ research outcomes on optical technologies. AHEAD is a 3D AR HUD (head-up display). It uses 3D display technology to deliver information to drivers in a way that does not make them shift their focal point. Since the actual view of the road that the driver is watching has the same focal point as the display, the driver can take in location and mobility information more easily and naturally. In the future, various information and services provided by the xDM Platform may be delivered naturally to drivers through AHEAD. We are also working to refine AMBIDEX, the robot arm we unveiled last year, to make it safer for interaction in daily environments. Unlike conventional robots, which primarily focus on position control, controlling force is more important for AMBIDEX. For this reason, we have developed a simulator for kinematic and dynamic modeling. By running simulator tests before powering up the robot, we have been able to improve safety and quickly collect a vast range of data for different conditions. NAVER LABS envisions a world where tools and technologies naturally coexist with our everyday life. Our presentation of these performance outcomes and the xDM Platform through the DEVIEW 2018 keynote address was part of our effort to realize that vision. We wish to understand the contexts of life in every space in which humanity resides, and to develop new services and tools based on that understanding.
We believe technology should understand people; people should not have to understand technology. NAVER LABS will not stop working towards the realization of this vision, and will continue to grow together with our partners, sharing our technology and constantly introducing new platforms.
NAVER LABS is developing a search engine based on Foursquare’s point-of-interest (POI) data to provide a global localization service. The strategic partnership draws on our natural language processing (NLP) and map service technologies. Foursquare has an enormous amount of global POI data. People from around the world use Foursquare’s service to visit places for different reasons and in different contexts. By adding our know-how and technology, we want to create an advanced POI search engine adapted to each individual’s needs. We also expect to develop new business models combining the data and technology of both companies. NAVER LABS conducts research in ambient intelligence. It supports users by providing information through an understanding of their environment and lifestyle, centered on location and mobility. We see no boundaries when it comes to users or lifestyles – each is unique. As announced in the partnership with HERE, our collaboration with Foursquare extends our ambient intelligence vision to a global scale, opening the door to new services and technologies.
NAVER LABS has signed a Memorandum of Understanding with HERE to develop autonomous 3D indoor maps. Key to the creation of these maps is NAVER LABS' Scalable & Semantic Indoor Mapping (SSIM) technology. The development of indoor maps relies heavily on manual human work, making them not only lengthy and expensive to produce, but also difficult to keep up to date. Our advanced SSIM technology will provide an efficient solution to automatically update Points of Interest (POI) in indoor environments where the information changes all the time. The blueprint for autonomous indoor mapping with HERE and SSIM is as follows:
1. A 3D high-resolution map is created with the laser scanner and high-performance camera of the mapping robot M1, which moves across the indoor area.
2. Data on the indoor space is continuously collected by the AROUND service robot.
3. The data AROUND collects is then analyzed by AI technology, which detects any changes in the environment and updates the service in real time.
We expect this automatic solution to revolutionize how indoor maps are created and maintained. Together with HERE, we are moving ahead with a proof of concept of advanced SSIM. Through this project we will mature the SSIM technology, and we expect to develop a cornerstone for indoor map construction and a foundation for future innovations.
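The POI change detection at the heart of the SSIM loop can be sketched as a diff between the current map and what the service robot has just observed. The function below is purely illustrative (the location IDs, shop names, and output format are invented); in the real pipeline the "observed" names would come from AI analysis of robot imagery, as described above.

```python
def detect_poi_changes(map_pois, observed_pois):
    """Both arguments map a location id to a shop name (None if vacant).
    Return a list of change records: opened, closed, or renamed."""
    changes = []
    for loc, old_name in map_pois.items():
        new_name = observed_pois.get(loc, old_name)
        if new_name != old_name:
            kind = ("closed" if new_name is None
                    else "opened" if old_name is None
                    else "renamed")
            changes.append({"location": loc, "from": old_name,
                            "to": new_name, "kind": kind})
    return changes

current_map = {"A-101": "Coffee Nine", "A-102": None, "B-201": "Book Haven"}
observed    = {"A-101": "Coffee Nine", "A-102": "Noodle Bar", "B-201": None}
updates = detect_poi_changes(current_map, observed)
```

Each change record would then be pushed back into the indoor map, which is the "updates the service in real time" step of the blueprint.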
An image-based safe lane change (SLC) algorithm is proposed to aid lane-change maneuvers for both autonomous driving agents and human drivers. A binary classification (free or blocked) is performed to confirm that the ego-vehicle's surroundings are safe before moving to a target lane. For precise classification, SLC uses a convolutional neural network (ConvNet) that learns image features from a large-scale dataset. ConvNets are effective in that they can extract subtle image features that hand-crafted functions could not capture before; however, we come to doubt a ConvNet when its outputs do not align with our intuition. In fact, we cannot handle anomalous events if we do not understand how the ConvNet works. Road environments change every moment, so we must test autonomous driving functions cautiously before deploying them on the road. In other words, understanding the internal mechanisms of a ConvNet is essential for adopting it in autonomous driving systems. Recent research on weakly-supervised object localization gave us a clue as to how a ConvNet makes decisions. In this article, we introduce Class Activation Mapping (CAM) and analyze where the SLC algorithm looks in an image. So, what is the weakly-supervised object localization task? To solve well-defined machine learning problems, supervised learning algorithms require plenty of data points and the corresponding ground-truth labels. For image classification, a dataset consists of images and the keywords that describe them. On the other hand, to learn a model for an object detection task, we need not only the object names but also the image coordinates of the objects (see Fig. 1). As a task becomes more difficult, building a new dataset for a supervised learning setup takes more time and money. Thus, researchers look for new methods to apply existing large-scale datasets to different domains.
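The binary decision stage described above can be sketched as a thin wrapper around the classifier's output. This is not the authors' implementation: in the real system a trained ConvNet produces the blocked/free prediction from a rear-side camera image, whereas here a hypothetical probability and an assumed 0.5 threshold stand in for it.

```python
BLOCK_THRESHOLD = 0.5  # assumed decision threshold, not from the paper

def safe_lane_change(p_blocked, threshold=BLOCK_THRESHOLD):
    """Map the model's P(blocked) to the binary free/blocked decision.
    Declare 'free' only when the model is confident the lane is clear."""
    return "blocked" if p_blocked >= threshold else "free"

def can_change_lane(p_blocked_left, p_blocked_right, target="left"):
    """Check the target lane's classification before committing to the maneuver."""
    p = p_blocked_left if target == "left" else p_blocked_right
    return safe_lane_change(p) == "free"

# Hypothetical outputs: the left rear-side view is clear, the right is not.
decision = can_change_lane(p_blocked_left=0.08, p_blocked_right=0.93, target="left")
```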
For example, weakly-supervised object localization attacks the object detection task using image classification datasets, where the object localization labels are missing. Fig. 1: For an image, the ground truth label varies depending on the task: examples of ground truth labels for image classification (left) and for object detection (right) How do we learn a model for image classification? For image classification, the architecture of most ConvNets can be divided into two parts: convolutional layers that compute image features, and fully-connected layers for classification (see Fig. 2). Fig. 2: Image features are computed with convolutional layers, then pass through the fully-connected layers for a prediction. Supervised learning algorithms attempt to reduce the difference between the prediction and the ground truth during the training phase. We lose spatial information when reshaping an image feature to feed the following fully-connected layers. In the weakly-supervised object localization task, we exploit the intermediate image features computed by the convolutions and obtain the salient regions behind a prediction. The CAM algorithm thus assumes that the regions containing large parts of a certain object will be strongly activated during classification. More precisely, we explain the CAM algorithm with the VGG16 network architecture. VGG16 generates image features of size (512, 7, 7) at the last convolutional layer when it takes a (3, 224, 224) input image. Viewing this feature as a (7, 7) map with 512 channels, each channel contributes differently to the classification of each object class. The CAM algorithm therefore learns the relative importance of the channels at the following fully-connected layer. Using those weights, we aggregate the feature maps over the channels and finally obtain a saliency map that shows where the ConvNet looks in the image to make a prediction (see Fig. 3) Fig.
3: Since, in the weakly-supervised object localization task, we have no information about object locations in the image, we cannot apply the supervised learning regime to learn a localization model. Instead, the CAM algorithm adaptively sums the image features, with weights identical to the parameters of the fully-connected layer that follows the convolutions. We can then see the activated areas where the ConvNet focuses to predict a class. Back to the story of our autonomous driving research To learn an SLC model, we annotated rear-side view images, captured in various road environments, according to the following criteria: Blocked if the ego-vehicle cannot physically move to the target lane; Free if the ego-vehicle can move to the target lane; and Undefined for ambiguous situations such as crosswalks and other unusual scenes. The annotation rules are akin to a human driver's decision-making process for a lane change -- we instantly decide whether to move to a target lane by checking the rear-side view mirrors. To tolerate various driving behaviors when building the dataset, we only take a ground truth label when multiple annotators agree on the status of the scene. Can the SLC model make a correct prediction on roads it has not visited? Yes, it can. To examine the generalization performance of the SLC model, we tested it on images that were not used during the training phase and achieved 96.98% classification accuracy. Using CAM, we also verified that the SLC model behaves as intended. We replaced the fully-connected layers of the SLC model with a single fully-connected layer of length 512. With the parameters of the convolutions fixed, we fine-tuned the SLC model on the same dataset to obtain saliency maps. As shown in Fig. 4, similar to human drivers, the SLC model looks at the space in the adjacent lanes to judge the probability that a lane change will succeed. Fig.
4: The classification result of the SLC model (left), and the visualization result using CAM to highlight the areas behind a prediction (right) The following video was recorded inside an autonomous driving car running in a complex urban road environment; the results of the perception algorithms are displayed on the right. The SLC algorithm deployed in the NAVER LABS autonomous driving car secures safety for lane-change operations. References 1) S.-G. Jeong, J. Kim, S. Kim, and J. Min, "End-to-end Learning of Image-based Lane-Change Decision," in Proc. IEEE IV’17 2) B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, "Learning Deep Features for Discriminative Localization," in Proc. IEEE CVPR’16 3) MatCaffe implementation of Class Activation Mapping: https://github.com/metalbubble/CAM 4) Keras implementation of Class Activation Mapping: https://github.com/jacobgil/keras-cam
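The weighted aggregation at the heart of CAM, as the article describes it, fits in a few lines: the saliency map for a class is the channel-wise weighted sum of the last convolutional feature maps, with weights taken from the fully-connected classification layer. The sketch below uses toy shapes and values (a real VGG16 feature is 512 channels of 7x7, and the map would then be upsampled onto the input image).

```python
def class_activation_map(features, fc_weights, class_idx):
    """features: list of C feature maps, each an H x W nested list.
    fc_weights: per-class list of C channel weights (one row per class).
    Returns the H x W saliency map for the requested class."""
    weights = fc_weights[class_idx]
    h, w = len(features[0]), len(features[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for channel, wgt in zip(features, weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wgt * channel[i][j]
    return cam

# Two 2x2 feature maps: channel 0 fires at top-left, channel 1 at bottom-right.
features = [
    [[1.0, 0.0], [0.0, 0.0]],
    [[0.0, 0.0], [0.0, 1.0]],
]
fc_weights = [
    [1.0, 0.0],  # class 0 relies on channel 0 -> CAM highlights top-left
    [0.0, 1.0],  # class 1 relies on channel 1 -> CAM highlights bottom-right
]
cam_for_class_1 = class_activation_map(features, fc_weights, class_idx=1)
```

In the SLC setting, the highlighted cells correspond to the regions of the rear-side view (the adjacent lane space) that drove the free/blocked prediction.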
At last year’s DEVIEW, the NAVER LABS robotics team announced the 3D indoor mapping robot, M1. Since then, M1 has evolved into the product AROUND, which was unveiled at this year’s annual conference. AROUND has been designed to popularize indoor autonomous robots, whose high price tags have so far prevented their penetration of the consumer market. By making them more accessible, people will be able to experience a number of indoor autonomous driving robot services in different spaces and environments. The LABS solution distributes the core functions of autonomous driving that constitute a high proportion of manufacturing costs. Until now, a robot had to produce maps, identify its location, create routes, and avoid obstacles all by itself. NAVER LABS has allocated these requirements to different devices that work in tandem. The devices developed by LABS are AROUND, M1, and the map cloud. M1 produces the map, the map cloud creates the routes, and AROUND focuses on accurate autonomous driving and avoiding obstacles using only low-cost sensors and little processing power. The reduction in manufacturing costs will make it possible to mass-produce customised indoor service robots that can assist people in many different places and in many different ways. AROUND is scheduled to operate for the first time at the YES24 bookstore in the F1963 shopping complex in Busan. AROUND collects books that customers have finished browsing in its storage unit and, once they exceed a certain weight, moves them to a designated place. From there, employees can collect the books and put them back. This solves one of the most tedious chores bookstore employees have to deal with on a daily basis. Since the store's book inventory is computerized, if even a single book is in the wrong place, employees need to check all the surrounding books. AROUND is expected to significantly relieve staff of such painstaking work.
AROUND will change the reading experience in bookstores because it connects the spaces where books are displayed with the places where people read them. AROUND makes it possible for people to choose their books and take them to a comfortable place for browsing, instead of having to look at them standing up. When they’re done, they simply place the books in AROUND, which will take them away. The ambient intelligence of AROUND lies in how it integrates user context and the cultural characteristics of a space to create a better experience.
NAVER LABS, an ambient intelligence company specialized in location and mobility, announced AKI at DEVIEW 2017. AKI, a location and mobility watch for elementary school children and their parents, provides safety solutions that treat relationships as an important factor. Parents are naturally worried or concerned about their young children when they’re not with them. They often want to know whether the children have arrived safely at school, or who they’re with at different times throughout the day. Children may also need to be reassured that someone will be there to pick them up after school, and when. To answer these questions, a number of pieces of information need to be gathered, including the accurate locations and places of the people involved. AKI is designed to provide parents with information on where their children are at any time, and it can alert them when a child is in an unfamiliar place or performing unusual activities and movements. AKI utilizes NAVER LABS’ own WPS (WiFi positioning system), which provides an exact position even indoors, and its automatically controlled, low-power location detection recognizes behaviour. It is equipped with personalized Wi-Fi fingerprinting technology. AKI detects the exact location of the child and how the child is moving with an activity detector and movement classifier. It learns the pattern of the child’s daily routine by analyzing place, time, and situation, so that it can alert parents when there is an ‘abnormality’, i.e. a place that is not part of the child's daily routine. When the location of a child has been accurately identified, the information can be communicated in a natural, contextualised way. NAVER LABS strives to apply ambient intelligence to mobile user environments. AKI addresses one of the important parts of our lives served by location-based information. The location of a child is precious information that parents of young children naturally want to have.
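The routine-learning idea described above can be sketched as counting which (place, hour-of-day) pairs occur in the child's normal week and flagging observations that fall outside that learned routine. This is a hypothetical illustration only: AKI's actual model also uses movement classification and situation analysis, and the class name, threshold, and data here are all invented.

```python
from collections import Counter

class RoutineModel:
    """Toy routine model: a (place, hour) pair is 'normal' once it has been
    seen at least `min_count` times in the training observations."""
    def __init__(self, min_count=2):
        self.counts = Counter()
        self.min_count = min_count  # assumed threshold, illustrative only

    def observe(self, place, hour):
        self.counts[(place, hour)] += 1

    def is_abnormal(self, place, hour):
        return self.counts[(place, hour)] < self.min_count

model = RoutineModel()
for _ in range(5):                # a normal school week
    model.observe("school", 9)    # at school by 9 a.m.
    model.observe("home", 16)     # home by 4 p.m.

alert_school = model.is_abnormal("school", 9)   # habitual -> no alert
alert_arcade = model.is_abnormal("arcade", 14)  # unfamiliar -> alert parents
```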
AKI is equipped with the ambient intelligence philosophy and technology of NAVER LABS and will be available this year.
NAVER LABS has introduced AIRCART at the YES24 bookstore. The electric cart delivers books from the warehouse to the store. It was named ‘AIRCART’ because the motor automatically increases its power, giving the impression that the cart is gliding, even when carrying heavy loads. Equipped with an automatic braking system, it is safe going uphill and downhill. As bookstores can be busy places, AIRCART has been designed so that cart users can easily see whether there is sufficient space in front of the cart, to prevent collisions and to protect small children. The shelves of the cart are tilted inwards so that more books can be loaded and so that they don’t fall out. AIRCART is equipped with physical human-robot interaction (pHRI) technology, a technology also used in wearable human power amplifiers. The movement of the cart (momentum and direction) is controlled in real time by identifying the user’s intentions through the force sensor on the cart handle. This makes it easy for anyone to use AIRCART with no prior experience. NAVER LABS' research in location and mobility is driven by the desire to provide natural, useful everyday services that impact people’s lives, and its research in robotics is no exception. AROUND and AIRCART are two examples of technologies that add value to people's lives. The NAVER LABS robotics team will continue collaborating with partners and entrepreneurs so that people can benefit from new ambient intelligence services and products.
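The pHRI idea of reading the user's intention from the handle can be pictured with an admittance-style control law: the force the user applies is mapped to a commanded cart velocity, so a light push moves a heavy load. This is a hedged one-dimensional sketch, not AIRCART's actual controller; the gain and speed limit are invented parameters.

```python
def cart_velocity(force_n, gain=0.05, max_speed=1.2):
    """force_n: signed handle force in newtons (+ forward, - backward).
    Returns a commanded speed in m/s, clamped for safety."""
    v = gain * force_n            # admittance-style mapping: force -> velocity
    return max(-max_speed, min(max_speed, v))

v_push = cart_velocity(10.0)     # gentle push moves the cart forward
v_shove = cart_velocity(100.0)   # a hard shove is clamped to the speed limit
v_pull = cart_velocity(-10.0)    # pulling back reverses the cart
```

The clamp is the safety-relevant part: no matter how hard the handle is pushed, the commanded speed stays within a bound, which matters in a crowded store.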
AMBIDEX is a robot arm that interacts very naturally with humans. It is the fruit of a long-term research project with Korea Tech and, in particular, with professor Yong-Jae Kim, a world leader in the field, at a facility with world-class robotic arm mechanism design capabilities. Robot arms have a long history in robotics research, where they have mainly been developed for manufacturing purposes focused on precision, repetition and heavy-load work. This kind of heavy, bulky robot arm is not well suited to a home setting and could even be considered dangerous. NAVER LABS’ work in the areas of hardware, control, recognition and intelligence aims at making the robot arm in the home a reality. AMBIDEX, one of the fruits of this research, was unveiled on stage at DEVIEW. AMBIDEX is safe for people to interact with and is even lighter than a human arm: its cable-driven mechanisms place all the heavy actuators in the shoulder and body, so the arms themselves stay light and can be driven with wires. Using innovative mechanisms that enhance the force and strength of each joint, AMBIDEX has achieved the same level of control performance and precision as industrial robots. AMBIDEX aims to be a breakthrough robotic hardware solution that can work safely, flexibly and precisely alongside humans.
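To illustrate the cable-driven principle mentioned above, here is a toy model of an antagonistic wire-driven joint. This is a generic textbook-style sketch, not AMBIDEX’s actual mechanism; the function, its parameters and the numbers are illustrative assumptions:

```python
def joint_torque(tension_a_n, tension_b_n, pulley_radius_m=0.02):
    """Toy antagonistic cable-driven joint.

    Two cables wrap a joint pulley in opposite directions; the net
    joint torque is the tension difference times the pulley radius.
    Remote actuators (e.g. in the shoulder or body) only need to pull
    on the cables, which is what keeps the arm itself light.
    All values here are illustrative, not AMBIDEX's.
    """
    return (tension_a_n - tension_b_n) * pulley_radius_m

print(joint_torque(100.0, 40.0))  # net torque of about 1.2 N*m at the joint
```

The design trade-off this captures is the one the article describes: the heavy motors stay off the arm, at the cost of routing and controlling cables, which is why mechanisms that amplify force at each joint matter.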
At this year’s DEVIEW, a whole range of new ambient intelligence products and technologies were revealed in the NAVER LABS keynote. Ambient intelligence technology detects and understands humans and their contexts to naturally provide information or perform actions at the time of need. During his keynote, Changhyun Song, CEO of NAVER LABS and NAVER CTO, emphasized the motivation behind the ambient intelligence research he leads: “In this world where tools and information are overflowing, technology needs to understand humans and environments even better. The real value of technology will only be realized when it has become part of the fabric of everyday life”. All of the research results shared during the keynote contribute to NAVER LABS’ vision of ambient intelligence, and we will continue to focus on technology, products and services that directly impact people. NAVER LABS envisions a future where people and society are not restricted by tools and technology: a world where people can focus on the things they value most in life, and where ambient intelligence helps them do so.
AWAY is an infotainment platform for vehicles with a user interface that enhances driver safety and optimizes music, news and other media services for the driving environment. The AWAY head unit gives drivers simultaneous access to various functions, from media content to navigation, on a wide 24:9 ratio screen that supports split view. AWAY has been deployed in vehicles operated by the Korean car-sharing company Green Car, which plans to install AWAY in 3,000 vehicles within the year.
NAVER Corporation and Xerox Corporation today announced an agreement for NAVER to acquire the Xerox Research Centre Europe in Grenoble, France. The French Works Council’s consultation on this project has now been completed, and the agreement is expected to close in the third quarter, subject to fulfillment of certain customary conditions. Once the sale becomes final, all 80-plus researchers and administrative staff are expected to become part of NAVER LABS. Based in Seongnam, South Korea, NAVER is Korea’s leading Internet company, operating the nation’s top search portal “NAVER” and other innovative services in the global market such as the mobile messenger LINE, video messenger SNOW and community app BAND. NAVER LABS is an ambient intelligence company that develops future technologies including autonomous driving, robotics and artificial intelligence. Since its establishment as NAVER’s R&D division in 2013, it has led NAVER’s innovation in technology through products such as ‘Papago’, an AI-based translation app; Whale, the omni-tasking web browser; and M1, the 3D indoor mapping robot. Founded in 1993, the Xerox Research Centre Europe is located just outside Grenoble, often dubbed the Silicon Valley of Europe. The centre has focused its research on artificial intelligence (AI), machine learning, computer vision, natural language processing and ethnography. “The research expertise at the European centre is perfectly aligned with NAVER LABS’. We expect immediate, powerful synergies,” said Chang-hyeon Song, CEO of NAVER LABS and CTO of NAVER.
“XRCE’s world-class R&D achievements in AI technology, including computer vision and machine learning, will significantly strengthen NAVER LABS’ research in ‘ambient intelligence’, including autonomous vehicles, AI/deep learning, intelligent 3D mapping, robotics and natural language processing.” With such a strong foothold in Europe, NAVER LABS expects to considerably accelerate its development of ambient intelligence technologies around the globe, and in particular in AI. NAVER LABS Europe homepage
The autonomous vehicle developed by NAVER LABS was the first in South Korea’s IT industry to receive a temporary operating permit from the Ministry of Land, Infrastructure and Transport, in February 2017. This allowed us to advance our autonomous driving technologies by combining data from actual driving conditions with the deep learning technologies we had already amassed. Going forward, we plan to develop safer and more convenient mobility solutions through further research into autonomous driving. We will also continue to turn the many possibilities created by connecting cars and data into safety and convenience on real roads.
M1 is an indoor 3D/HD mapping robot that navigates autonomously in indoor spaces. M1 automatically collects high-resolution images and 3D spatial data via high-performance cameras and LiDAR, significantly improving the efficiency of what was previously a manual mapping process. The resulting HD maps provide spatial data that is essential to location-based services.
Company overview Founded in 2013 as NAVER's research center, NAVER LABS spun off as a separate entity in 2017 to focus its research on ambient intelligence in areas such as autonomous driving, robotics, artificial intelligence and geospatial data. NAVER LABS' mission is to achieve ambient intelligence that enriches user environments with technology that proactively understands users and provides them with information and services. In line with this mission, distinguished researchers from Korea and Europe are committed to understanding the places where people carry out their lives, connecting these locations and shaping the future of mobility. Corporate media contents [Video] NAVER LABS, an Ambient Intelligence company [Video] NAVER LABS Intelligence in Mobility concept [Video] NAVER LABS Robot M1 [Video] NAVER LABS Space & Mobility Interview [Video] NAVER LABS M1 3D indoor mapping process [Video] NAVER LABS IVI (In-vehicle infotainment) [Video] NAVER LABS AROUND indoor robot [Video] NAVER LABS AMBIDEX robotic arm [Video] NAVER LABS AIRCART power-sensitive cart Corporate media channel Web site Facebook Instagram Youtube SlideShare Behance