NAVER LABS' indoor dataset is the result of scanning COEX, one of the largest shopping malls in Korea, twice at an interval of about two months (Jun. 2018 and Sep. 2018). The dataset consists of 17.5K geo-localized images covering 578 points of interest (POIs), captured by a device called Pumpkin that has two LiDARs and multiple cameras. We currently provide only the images taken by Pumpkin's left and right side cameras, which are designed to capture storefront images that can be used for POI recognition and change detection tasks. In the near future, we will release the images taken by the other camera types as well, so the dataset can also be used for VSLAM and visual localization research.

Downloads
COEX POI Change Detection dataset

Scanning device: Pumpkin
Pumpkin is equipped with the following main sensors:
Cameras: 6 x Sony RX0 (2 with Samyang fisheye wide-angle lenses), 2400x1600, 2 Hz, anti-distortion shutter (1/32000 super-high-speed shutter), ZEISS Tessar T* lens, 84° FoV (Samyang fisheye lens: 106° HFoV, 70° VFoV)
LiDAR: 1 x Velodyne Puck 16-channel LiDAR, 360° HFoV, 30° VFoV, 4 planes, 10 Hz, 100 m range, 0.1~0.4° vertical resolution, 2.0° horizontal resolution
Sensor Location

Data format
This dataset consists of images and their poses. The name of each image includes the serial number and timestamp, as '[serial #]_[timestamp].jpg'. The poses at which all images were acquired are stored in a separate file, 'sensor_trajectory.hdf'. In this file, 7-degrees-of-freedom (DoF) poses for all of the images are recorded. The 7-DoF state is 'x, y, z' for position and 'qw, qx, qy, qz' for orientation, in that order. The file contains two paired tables, pose and stamp: the pose for the n-th stamp is the n-th entry in the pose table. If you are more familiar with '.json' than '.hdf', you can download the file and convert it.

How to generate data
Data acquisition
All of the images of this dataset were acquired by Pumpkin. To collect as much data as possible, we acquired images periodically and without stopping rather than with stop-and-go motion. As mentioned above, because the RX0 has an anti-distortion shutter, we assume there is no distortion caused by movement. All of the data, including point clouds and images, were recorded on the same timeline using the UNIX timestamp of the main processor.

Estimating image pose
To accurately estimate the pose at which each image was acquired, LiDAR-based SLAM was performed. However, since the LiDAR and camera acquisitions were not synchronized, the pose of Pumpkin at the moment each image was acquired was obtained by linear interpolation based on timestamps. The pose of each image was then calculated from the relationship between Pumpkin's base and each camera, and tagged to the image.

Blurring
To publish the dataset, we blurred faces in the images with our object detection model. The model was trained on data from Naver Street View, which includes face annotations. We ran the model on our images to localize faces and applied a median filter to blur them. The remaining faces that the model failed to localize were handled manually.
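For readers who want to work with the trajectory file programmatically, the following is a minimal sketch of reading the paired pose/stamp tables and looking up the pose nearest to an image's timestamp. It is an illustration only: the h5py-based access, the exact dataset names, and the example filename are assumptions based on the description above, not part of any official tooling.

```python
import h5py
import numpy as np

# Read the 7-DoF trajectory: 'pose' is N x 7 (x, y, z, qw, qx, qy, qz),
# 'stamp' is N timestamps paired with poses by index (dataset names assumed).
with h5py.File("sensor_trajectory.hdf", "r") as f:
    poses = np.asarray(f["pose"])
    stamps = np.asarray(f["stamp"])

# Hypothetical image name in the '[serial #]_[timestamp].jpg' format.
image_name = "12345_1538300000.0.jpg"
timestamp = float(image_name[:-len(".jpg")].split("_", 1)[1])

# Nearest-stamp lookup; the dataset itself was built with timestamp-based
# linear interpolation, so in practice you may interpolate between neighbors.
idx = int(np.argmin(np.abs(stamps - timestamp)))
x, y, z, qw, qx, qy, qz = poses[idx]
print(f"pose at stamp {stamps[idx]}: position=({x:.3f}, {y:.3f}, {z:.3f})")
```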
What’s ACROSS
NAVER LABS’ ACROSS is a project that was initiated to develop a crowdsourced mapping solution for maintaining the recency of HD road maps.

Background
"An HD map is the most essential piece of data required to enable autonomous driving on the road"
Precise HD maps are essential for an autonomous driving machine. An HD map allows the machine to recognize its current location more accurately; the sensors mounted on the machine alone are sometimes not enough for that job. This prior knowledge is also useful when planning a driving route and predicting which areas will require more attention. The importance of an HD map therefore grows even greater in a large, complex city. That is why NAVER LABS has continued to develop HD maps with a unique technology called hybrid HD mapping. Hybrid HD mapping is a method in which a wide range of road layout information is first obtained from aerial photographs and then organically combined with point cloud data collected on the road by R1, an independently developed mobile mapping system (MMS). This solution makes it possible to map something on the scale of a large city more cost-effectively and within a shorter period of time, while, of course, maintaining a high level of precision. However, there is still something missing. It is the destiny of all maps: keeping them up to date. Maps reflect reality, but not the present. The time when a map was made will always be in the past. After that time, a new road may have appeared or a new building may have been built. An updating solution is therefore directly tied to maintaining the precision of a map. (The same is true for the self-updating map introduced earlier, a technology that keeps indoor maps up to date using robots and AI.)

Approach
"The dilemma of crowdsourcing: a tradeoff between the cost and performance of sensors in the mapping device"
That is why hybrid HD mapping technology also requires an updating solution. The ACROSS project is research aimed at developing such a solution. We have selected a crowdsourced mapping method, in which mapping devices are installed inside multiple vehicles to simultaneously identify changes in road information over a wide area. We are currently developing a solution that detects and updates changes in the road layout (lane information, locations of stop lines, road markings, etc.) or in 3D structures (traffic signs, buildings, traffic lights, streetlights, etc.) by processing image data collected by the sensors inside the mapping devices. However, there remains a dilemma to overcome: the mapping devices must be highly compact and built with low-cost sensors (cameras, IMU, GPS). This way, they can be installed in more vehicles, addressing the issues of change-detection coverage and update cycle for an HD map. But designing a mapping device with low-cost sensors and processors inevitably results in performance tradeoffs. In the end, device design that facilitates wide deployment, together with algorithm optimization, constitutes the core of the ACROSS project. To this end, a wide range of technologies developed by NAVER LABS, including sensor fusion, computer vision, image processing, and machine learning, are being continuously applied. 5G networks also offer a new opportunity for ACROSS.
The high bandwidth of 5G is beginning to change the environment so that map information can be received faster and updated simultaneously. Above all, more options have become available between cloud and edge computing for optimizing the balance between devices and algorithms.

Challenge
"A world where high-precision 3D data on cities and roads is updated in real time"
We expect that there will be many trials and errors along the way to the success of the ACROSS project. We remain relentless in our efforts to overcome challenges that have not yet been mentioned. It is important to remember, however, that these are crucial trials and errors. Through such fierce challenges, the core technologies for HD maps and autonomous driving on the road will ultimately be acquired. This year, we will focus on designing the ideal mapping device for ACROSS and optimizing algorithms based on those findings. Once this step succeeds, we will move on to a more diverse set of semantic mapping steps. Autonomous driving machines will form part of our lives in the future. HD maps for these machines will be there first, and then autonomous driving machines will gain the ability to update HD maps automatically on their own. High-precision 3D data on cities and roads will create an organic, virtuous cycle: even more precise and even more up to date. The ACROSS project is preparing for such a world. We will continue to share the progress and achievements of the ACROSS project.
1. 5G is a whole new change beyond fast internet.
5G is the next-generation communication technology after 4G (commonly called LTE), upgraded in all aspects with ultra-high speed, hyper-connectivity, and ultra-low latency. With fast 5G speeds, network capacity will be virtually unlimited, and latency will be so low that almost no delay is felt. Does this only mean improving the things we already enjoy? Now that the role and importance of mobile networks have greatly increased, 5G is anticipated to bring about a whole new change to mobile communications, surrounding ecosystems, and related industries. Along with technological advances such as artificial intelligence and XR, many things that were previously impossible or restricted are being newly attempted with 5G technology.

2. The most powerful use of 5G is with robots.
Although Korea has been introducing 5G technology at a fast pace, there are not yet enough cases of 5G being used effectively. Currently, the main 5G use cases only demonstrate large video transmissions with a one-directional signal pattern. For a robot to move, on the other hand, the role of 5G is essential, because a robot has a “bi-directional signal pattern” that requires constant data exchange between its sensors and a high-performance computer. Furthermore, if the communication latency becomes extremely low, we can hypothesize that high-performance robot control will be possible even if the robot's computer is separated from its main body. To verify this, NAVER LABS, jointly with Qualcomm, created the world's first robot demo operated over 5G and successfully demonstrated it at CES. We tried to use and apply the capabilities of 5G effectively.

3. 5G can create a “brainless” robot that never existed before.
The robot demonstration performed at CES with Qualcomm was a 5G brainless robot: the high-performance computer that acts as the brain of the robot was pulled out of the main body. With 5G's low latency, we can separate from the robot even the part that acts as the “human cerebrum,” which requires the greatest processing power. In the 5G era, the MEC servers of communication base stations will act as the cerebrum for controlling the robot's posture and motion, and the cloud will act as the robot's brain. In other words, we will be able to implement a “brainless” robot in which the 5G-connected cloud acts as the brain.

4. 5G brainless robot technology eliminates robots’ physical limitations.
It has now become possible to separate the brain from the robot using 5G technology. So what does this make possible? Until now, a small robot could only carry a small computer; it was literally a physical limitation. However, if the cloud can act as the robot's brain, we can create a highly intelligent robot regardless of its size. This means a palm-sized robot with the intelligence of a high-performance computer can appear. If 5G brainless robot technology is implemented, we can control multiple service robots simultaneously from the cloud. Sophisticated robot algorithms can be provided through the cloud and updated easily, and production costs can be rationalized since there is no need to add a high-performance processor to each robot. In addition, placing the high-performance processing power outside significantly reduces the battery consumption of the robot itself.
5. 5G brainless robot technology is a catalyst for the popularization of robot services.
One important thing for the popularization of robot services is to maintain the required performance while lowering manufacturing costs. This way, we can lower the hurdles for commercializing robot services in various industries and spaces, and accumulate service know-how through repeated attempts in real spaces, which will accelerate popularization. 5G and cloud technology, which can overcome physical limitations and simultaneously control multiple robots with lower power, will be an important solution for popularizing these service robots. At CES 2019, NAVER LABS successfully demonstrated the world's first 5G brainless robot with Qualcomm. Subsequently, at MWC19, NAVER LABS, KT, Intel, and NAVER Business Platform started joint development of a 5G-based service robot. The plan is to develop service robots with NAVER Business Platform in an ultra-low-latency environment using Intel's and KT’s 5G solutions. Here, the NAVER cloud platform will act as the brain of the robots. Through 5G, the popularization of service robots is approaching much faster than we had imagined. Starting with 5G brainless robot technology, NAVER LABS will continue to produce products that accelerate the popularization of service robots.
You have probably experienced this a few times: you go to a shop you have not visited in a while and end up having to turn back because it has changed names. In fact, domestic spatial information is said to change by more than 30% each year. The world is constantly in flux, even at this very moment. In other words, a map that has been recently updated is an accurate map. If map data is managed only manually, the update cycle is slow and production costs are steep. Keeping maps up to date is a major concern for map users as well as online map service providers. Developing automation technologies for map updates is therefore crucial. To this end, researchers at NAVER LABS and NAVER LABS Europe have conducted joint research and developed technology for a "self-updating map.” This technology keeps map information up to date by recognizing business names that have changed, through the analysis of large-scale indoor spatial data collected by an autonomous driving robot. To achieve this, NAVER’s core technologies, such as robotics, computer vision, and deep learning, are utilized.

Automatically updating signboard changes using AI and a robot
We first tested this map-updating technology with a focus on large shopping malls, spaces where new stores open and other changes occur frequently. The self-updating map technology picks out only the stores that have changed in a large, complex interior space and provides data that allows map information to be updated automatically and accurately. The entire system is organized as follows. First, the autonomous driving robot moves around and collects images and positional information inside the shopping mall. Then, after some time, we take pictures of the same places again. We compare the map and location information of both sets of images to find the same spot, and use deep learning to determine immediately whether any changes have occurred. We have to be careful to distinguish whether a sign belongs to a storefront or is just an advertisement, because shopping malls are spaces with a great deal of exposed information. The algorithm we developed can accurately recognize when stores in a shopping mall open, close, change, or simply change their names over a period of time. We have verified that computer vision and deep learning, paired with an autonomous driving robot, are suitable for efficiently managing large-scale POI information and maintaining the recency of indoor map information. Although autonomous service robots have not yet been popularized, in the near future many people will live in spaces where they interact with robots frequently. Those robots will be able to provide a variety of services, including item delivery, security, and guidance, while simultaneously keeping indoor map information up to date using our self-updating map technology.

The outcome of joint research between NAVER LABS and NAVER LABS Europe, to be presented at CVPR
This technology was jointly developed by researchers at NAVER LABS and NAVER LABS Europe over a period of one year. The results of this research will be presented at the Conference on Computer Vision and Pattern Recognition (CVPR), held in California, USA this coming June, under the title of “Did it change?
Learning to detect point-of-interest changes for proactive map updates.” Based on these results, we will be able to attempt a variety of projects in the future. We can try to reflect other kinds of spatial data, such as sales information, on a map in real time beyond just business name changes, or to recognize and update spatial information changes on roads, i.e. outdoor spaces. The world is constantly in flux, changing at this very moment. But technologies are also constantly being developed: technologies that will allow us to catch up with these changes.
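As an intuition for how such a change detector can be used once two scans of the same mall are aligned, here is a minimal sketch: storefront images taken at the same spot in the two scans are compared by a learned similarity model, and high change scores are flagged as candidate POI updates. The model interface, the matching by location id, and the threshold are illustrative assumptions, not the method from the CVPR paper.

```python
from typing import Callable, Dict, List, Tuple

# change_score: a learned model mapping an (old image, new image) pair taken at
# the same location to a score in [0, 1]; only its type is sketched here.
ChangeScore = Callable[[bytes, bytes], float]

def detect_poi_changes(
    old_scan: Dict[str, bytes],   # location id -> storefront image from the first scan
    new_scan: Dict[str, bytes],   # location id -> storefront image from the second scan
    change_score: ChangeScore,
    threshold: float = 0.5,       # assumed decision threshold
) -> List[Tuple[str, float]]:
    """Return locations whose storefront appears to have changed between scans."""
    changed = []
    for loc, old_img in old_scan.items():
        new_img = new_scan.get(loc)
        if new_img is None:
            continue  # the spot was not revisited in the second scan
        score = change_score(old_img, new_img)
        if score >= threshold:
            changed.append((loc, score))
    return changed
```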
Meet the wearable robot technology that increases physical strength in real life
In 2017, NAVER LABS first introduced an electric cart called AIRCART, which incorporates wearable robot technology that enhances physical movement by increasing strength and endurance. The technology can be worn by workers who move heavy cargo or by people with physical disabilities to significantly increase their muscle strength or mobility. To apply it in areas where it can benefit people on a day-to-day basis, NAVER LABS built it into a cart, a tool frequently used by many people. AIRCART can transport heavy loads easily with only a light push. It moves easily up a hill and returns safely down using an automatic brake. AIRCART has received a lot of attention, not because of the complexity of the technologies implemented on it, but because of its applicability in real life. A reference model was actually put to use in a bookstore, which was followed by technological collaborations for commercialization. Last year, the AIRCART OPENKIT, which incorporates the patented technology and design of AIRCART, was made accessible to the public for six months while we began working on projects that aimed to apply the AIRCART technology in different areas. This is one of the results of that process: a wheelchair version of AIRCART.

A wheelchair that can be pushed with one hand and lets the caregiver make eye contact with the wheelchair user
As society ages, the demand for wheelchairs is increasing every year. One thing we took notice of was that a large proportion of caregivers or guardians of the elderly are elderly themselves. Not only that, 40.3% of people who have used a wheelchair reported having experienced an accident during use (based on the Survey on the Usage Status of Electric Assisting Devices, 2015). These are issues that can be solved by applying the AIRCART technology to wheelchairs. The AIRCART Wheelchair is equipped with the core technology of AIRCART, that is, technology that enhances physical strength and endurance. It is designed so that anyone can push the wheelchair easily and safely with a small force, regardless of the weight of the person in the wheelchair. When going down a slope, often a dangerous situation, it automatically maintains a constant speed so that the person pushing the wheelchair does not have to pull back to keep it from rolling. If the person loses hold of the wheelchair, AIRCART brakes automatically and stops.
<Testing of the wheelchair version of AIRCART – The brake is triggered automatically even if the caregiver loses hold of the handle>
For this project, we had to think beyond simply enhancing physical strength, because of the nature of wheelchairs: one person sits in them while another pushes. When you are in a wheelchair, it is difficult to have a conversation while making eye contact with the person helping you from behind. We wanted to solve this interaction problem. With the AIRCART Wheelchair, the caregiver can walk forward while pushing the wheelchair from the side with one hand. It is designed so that it feels as if the caregiver is walking alongside the person in the wheelchair, allowing them to make eye contact, see each other's facial expressions, and have an interactive conversation.
<Testing of the wheelchair version of AIRCART – A caregiver can easily push the wheelchair with one hand while making eye contact with the person in the wheelchair.>
Furthermore, it weighs much less than conventional electric wheelchairs and has an automatic folding function that allows it to be carried like baggage. In addition to the strength enhancement of AIRCART, it has enhanced safety features for various emergency situations, such as vibration prevention and an overturn prevention device.

Showcased at the ACM/IEEE International Conference on Human-Robot Interaction (HRI), through a collaboration with the CHIC Lab of Seoul National University’s College of Nursing
This project was the outcome of the 6th group of NAVER LABS interns. The work did not end with applying a pre-existing technology to a wheelchair in the lab; it also allowed the interns to collaborate with the Consumer Health Informatics & Communication Lab (CHIC Lab) at Seoul National University’s College of Nursing to discover and address issues related to the real-world application of this technology. Through this process, needs such as light weight and portability, practical and detailed measures against dangerous situations, and easily overlooked issues such as going over small bumps were discovered and properly addressed. On March 12, this project was presented at the ACM/IEEE International Conference on Human-Robot Interaction, where some 500 experts share integrated research results on human-robot interaction. It attracted a lot of attention, receiving an award in the Student Design Competition category. This is technology that seeks to understand people at a deeper level and solve problems in the real world in the most natural way possible.
Technology that makes it possible to take the brain of a self-operating robot out of its body
When we announced our 5G robot technology at CES 2019, many people assumed that the demonstration would be remote-control based, which is quite a cool piece of technology in and of itself. However, NAVER LABS, in collaboration with Qualcomm, went a step further to tackle a more challenging project known as the “5G Brainless Robot.” In essence, this technology takes the high-performance computer functioning as the brain of a self-operating robot out of the robot's main body. Despite the initial unfamiliarity of the idea, everyone, at one point or another, will have witnessed something similar in sci-fi films. In the movie The Avengers, for instance, it might not have felt so strange to see the cyborg Chitauri warriors collapse in tandem upon the destruction of the mothership. This idea, in fact, captures the basic essence of what brainless robot technology is. The minute decision-making that empowers the cyborgs to attack Hulk or thwart Thor’s assault is all formulated within the mothership and delivered via a wireless network, most likely 5G or higher. Had the telecommunications been 3G or 4G, a cyborg would have had no choice but to helplessly take the full brunt of Captain America’s punches: even after recognizing an imminent attack and issuing a command in response, signal latency would make it impossible to dodge.

Robots featuring ultra-reliable and low-latency 5G technology
Latency simply refers to the time required to give and react to a command. 5G is an ultra-reliable and low-latency communications technology with a latency of merely one millisecond, i.e. 0.001 seconds. It is one of the core technological features of 5G that is attracting significant attention. Applying the ultra-reliable, low-latency characteristics of 5G to a robot's control cycle enables some very fascinating possibilities (a control cycle denotes the time required to process signals collected by sensors and deliver commands to the motors). Many humanoid-type robots consist of more than 100 sensors and 30 motors, and the average cycle, during which data collected from the sensors is processed before commands are delivered to the motors, is about 5 milliseconds. The latency of 5G communications, however, is a mere 1 millisecond, shorter than the control cycle. Thus, it becomes possible to connect a robot via communications technology to an outside “brain” for high-performance posture and motion control, instead of integrating that brain within the robot. This essentially means that an MEC server, or a 5G cloud connection, may serve as a robot's brain to actualize a brainless robot. NAVER LABS' 5G brainless robot technology garnered significant attention at this year's CES thanks to the successful achievement of high-performance robot control utilizing 5G's ultra-reliable and low-latency features. In theory it may sound easy, but 5G technology is an area that has yet to be thoroughly explored. In particular, high-precision control of a robot through a 5G connection requires countless signals and processing data going back and forth, making its degree of difficulty extremely high. In the pole-balancing demonstration of the robot arm AMBIDEX, numerous commands to detect the tilting pole and adjust the arm's balance are repeatedly delivered through the 5G network.
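To make the timing argument concrete, here is a minimal sketch of an off-board control loop that checks whether sensing, remote computation, and command delivery fit inside one control cycle. Everything in it is an illustrative assumption (the placeholder functions, the 5 ms cycle taken from the figure above, and the simple proportional balance rule); it is not NAVER LABS' or AMBIDEX's actual control code.

```python
import time

CONTROL_CYCLE_S = 0.005  # ~5 ms control cycle cited above for humanoid-type robots

def read_sensors():
    """Placeholder for polling the robot's encoders and IMUs (robot -> server hop)."""
    return {"pole_angle": 0.02}

def compute_command(state):
    """Placeholder for the off-board 'brain': here a trivial proportional balance rule."""
    return {"joint_torque": -10.0 * state["pole_angle"]}

def send_to_motors(command):
    """Placeholder for delivering the command to the motor drivers (server -> robot hop)."""
    pass

for _ in range(1000):  # run a finite number of cycles for illustration
    cycle_start = time.monotonic()
    state = read_sensors()
    command = compute_command(state)
    send_to_motors(command)
    elapsed = time.monotonic() - cycle_start
    # Both network hops plus computation must fit inside one control cycle.
    # With ~1 ms one-way 5G latency the budget holds; with 3G/4G latency it would not.
    if elapsed > CONTROL_CYCLE_S:
        print(f"deadline missed by {elapsed - CONTROL_CYCLE_S:.4f} s")
    time.sleep(max(0.0, CONTROL_CYCLE_S - elapsed))
```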
Setting technical difficulties aside, what kinds of possibilities could this technology provide us with in real life?

Advantages of externally relocating the robot brain
Members of NAVER LABS' Robotics team actively dedicate themselves to research on robots that ultimately provide services to people. The majority of such robots require a high-performance computer installed inside the main body, which sounds, and actually is, expensive. This is exactly why reducing production costs is a prerequisite for the popularization of robots; hence a cloud-based service robot platform is a viable solution (NAVER LABS' AROUND platform, which drives itself indoors based on a map cloud produced by the mapping robot M1, was developed in a similar context). If the ultra-reliable, low-latency performance of 5G is utilized, however, it is possible to separate from the robot even the processing that corresponds to a robot's cerebrum, which requires a significant amount of computing power. Since an external server can control a number of robots simultaneously, each robot does not need a high-performance processor embedded inside it, reducing production and maintenance costs. Maintenance also becomes simpler because the cloud can integrate and analyze data collected by several robots and then conveniently update itself with newly learned algorithms. Furthermore, the power consumption of robots becomes that much more efficient. A robot's main computer consumes a great deal of battery power, much as roughly 20% of the human body's entire energy use is devoted to neural activity. For self-driving robots, the share of energy consumed by the main computer can go as high as 40%. In other words, simply making the high-performance processor external leads to a remarkable decrease in battery consumption. In fact, the battery charging period is a key factor in service robot usage. There is yet another interesting advantage. Would it now be possible to create a small robot with a high-performance computer? In the past, only a small computer could be embedded in a small robot due to physical limitations; however, if the cloud serves as the robot's brain, this opens up the possibility of creating super-intelligent robots regardless of size.

Technology popularizing service robots
NAVER LABS conducts research on ambient intelligence technology that blends into the physical spaces in which people dwell while naturally providing information and services. Service robots are a core platform for achieving this, which highlights the importance of 5G and cloud technology in popularizing these service robots. CES 2019 provided a platform for the competent and proud engineers of NAVER LABS and Qualcomm to successfully carry out the challenging task of demonstrating the world's first-ever 5G brainless robot. Also, at MWC19, NAVER LABS agreed to commence collaborative efforts in 5G-based service robot development with KT, Intel, and NAVER Business Platform. The goal is to develop service robots by utilizing various 5G solutions offered by Intel, to provide robot services under ultra-reliable, low-latency conditions that utilize KT's 5G telecommunications network and edge cloud infrastructure, and to empower the NAVER Cloud Platform to function as the robot brain.
We anticipate more progress ahead as the specialized engineers of each company devote their collective energy to actualizing the future of robot technology. It is undoubtedly thrilling to be at the moment where the imaginations of the past are able to be realized by the technology of the present. We aim to passionately collaborate through the best of partnerships so that we can produce something no less than extraordinary.
How close is the future we once imagined? You can find out by visiting this event: the Consumer Electronics Show (CES). CES is now the largest technology exhibition in the world. CES 2019 was a special event for NAVER and NAVER LABS because we held our first official booth. We unveiled new technologies that integrate our research results in areas including robots, autonomous driving, and AI, with 13 new products. Let us introduce the highlights of this exhibition.

5G Brainless Robot: a technology for taking the “brains” of the robot out of its “body”
The main topic of CES 2019 was 5G. NAVER LABS demonstrated an innovative 5G technology of the kind seen in science fiction movies: 5G Brainless Robot technology. This technology pulls the high-performance computers, which function as the robot's “brain,” out of the robot's “body.” An external cloud connected over a 5G network then serves as the robot's brain. The reason this technology received special attention at CES is that we, in collaboration with Qualcomm, achieved high-performance robot control fully utilizing the capabilities of 5G for the first time in the world. The potential future uses of this technology are innumerable. The NAVER data center can function as the brain of service robots working all over the world. Since multiple robots can be controlled simultaneously, there is no need to install a high-performance processor in each robot. It also becomes easier to integrate and analyze data collected by multiple robots and to update them simultaneously as new algorithms are refined. In many ways, this is a key technology for cloud-based robot services.
> Learn more about 5G Brainless Robots

Hybrid HD Map: a unique HD map solution for autonomous vehicles
A new technology geared towards autonomous vehicles was also unveiled. NAVER LABS has been researching autonomous vehicles since 2016 and introduced its Hybrid HD Map technology at CES. HD maps are a critical piece of data for autonomous vehicles. Making good use of an HD map allows the vehicle to know its current location more accurately and to plan routes safely and effectively. The Hybrid HD Map technology that NAVER LABS demonstrated is quite novel. Unlike methods used elsewhere, NAVER uses aerial photographs taken from airplanes together with MMS vehicles in a two-part process. First, the layout information of the road surface is extracted from the aerial photographs. Then, that data is organically combined with data collected by R1, a self-developed MMS (mobile mapping system). It is an effective way to produce a vast, city-scale HD map more accurately and quickly than ever before.
> Learn more about Hybrid HD Maps

AROUND G: a robot that drives autonomously without using a laser scanner
AROUND G is a robot that guides people through AR in large, complex indoor spaces such as shopping malls, airports, and hotels. An indoor autonomous robot is in itself no longer a new technology; there were many others at the most recent CES. However, AROUND G has a distinct point of difference: it does not use an expensive LiDAR (a laser scanner). A laser scanner is a device that perceives a robot's surroundings by measuring the time it takes emitted light to strike objects and be reflected back. Many autonomous machines use this type of sensor. The problem is that it is expensive.
What NAVER LABS has been researching is how to achieve fluid autonomous driving using only very cheap camera sensors rather than expensive equipment, because we believe such technology is needed to popularize autonomous robots. For AROUND G, many of the features required for autonomous robots are handled in a map cloud, and the robot itself is equipped with only low-cost sensors. Even low-cost sensors are sufficient for it to move very fluidly between obstacles and pedestrians, because it uses a deep reinforcement learning algorithm. This surprised many companies developing autonomous robots at the recent CES.
> Learn more about AROUND G

AHEAD: a 3D AR HUD
We also drew attention from many automobile manufacturers and electronic parts companies with AHEAD. AHEAD is a 3D AR HUD (head-up display) for vehicles. AHEAD utilizes 3D optical technology that adjusts the displayed information so it appears to lie on the actual road, right in the driver's natural line of sight. There are many advantages to having the actual road and the displayed information appear to be at the same distance. Since the images displayed by AHEAD look as if they are actually on the road, the gap between the road the driver must pay attention to and the place they have to look for information on a traditional dashboard is reduced, improving safety. This technology helps solve a concern with existing HUDs, where the focal points of the displayed virtual image and the actual road differ, which can distract the driver. AHEAD provides information naturally, without disturbance, allows the driver to keep their eyes forward, and can become a new display solution connecting vehicles and information.
> Learn more about AHEAD

We have also released many other technologies. The future that NAVER LABS has envisioned so far is integrated into a technological vision called ambient intelligence. Ambient intelligence is technology that understands user environments and provides the necessary information and services before users even request them. This is the future of NAVER. To this end, we have been researching technology for collecting high-precision data, such as indoor paths and roads, and using it to provide information and services through various robots and computing devices. "You mean to say that all this was developed by NAVER?" This was a question asked by a visitor who was happy to find the familiar NAVER logo at CES and came to our booth. Perhaps, as familiar as he was with NAVER, he was also excited and surprised to discover the new technologies we exhibited. We still find many people who are unfamiliar with the fact that NAVER is developing robots and researching autonomous technologies. However, these are the technologies we need to prepare for the future. These key technologies will be woven into future NAVER services and will provide users with information and services in new ways. That is why they fit the theme of this exhibition: "the possibilities of new connections and discoveries through technology." In addition to the technologies introduced above, you can find more information about the exhibits displayed at CES <here>.
Last year, we unveiled the xDM platform for the first time at DEVIEW. The xDM platform is an integrated location and mobility technology that combines technologies being researched at NAVER LABS, including robot- and AI-based HD mapping, location and navigation technologies, and high-precision data. The aim of the xDM platform is to enable various mobility and space-based services. As part of that effort, we introduced various location-based and self-driving services built on the xDM platform at CES 2019, including NAVER LABS' AR navigation, self-driving vehicle, service robot, and ADAS. Furthermore, today we begin our collaboration with LG Electronics, applying our xDM platform to LG Electronics' CLOi robot. By applying the xDM platform to robotics, indoor self-driving with precise control becomes possible using only low-cost sensors and low processing power. This is achieved by dividing the required functions and roles: the map creation task is allocated to a mapping robot, and the location identification and route creation tasks to the xDM cloud. Through the partnership with LG Electronics, we intend to improve the efficiency and precision of the CLOi robot by applying the strengths of the xDM platform, while perfecting its accuracy as an integrated location and mobility platform by utilizing the newly collected data. NAVER LABS will continue joint research and development efforts with LG Electronics on applying the xDM platform to other devices. We plan to conduct demonstration projects for performance improvement and optimization, and to find new ways to utilize the data collected through the collaborative project between the CLOi robot and the xDM platform. By integrating the proprietary technologies of the two companies, we expect new technological innovation to arise from a strong synergy effect. The ambient intelligence research of NAVER LABS aims to provide useful services that naturally integrate into daily living spaces. We intend to develop new services and tools that understand the contexts of everyday life in all the spaces where people reside. Hand in hand with a great partner, we will continue our efforts to realize this vision.
NAVER has proudly unveiled its booth at CES 2019. The booth is located in the Central Plaza of Tech East. See booth location and overview

■ AMBIDEX Demonstration - The world’s first 5G brainless robot
AMBIDEX, which uses innovative cable-driven mechanisms, is a robot arm capable of interacting safely with humans. Working together with Qualcomm, NAVER LABS successfully demonstrated the 5G capabilities of AMBIDEX at CES. The technology enables precise control over the robot using the low latency of 5G networks, without requiring high-performance processors on board.

■ AROUND G Demonstration - The culmination of xDM platform technologies
AROUND G is an autonomous guide robot that provides guidance in large indoor spaces such as shopping malls, airports and hotels. It is the culmination of technologies being researched under the xDM platform, including HD mapping, visual localization, robotics, AI, and AR navigation. A distinct feature of the robot is that it functions smoothly as an autonomous guide using a deep reinforcement learning algorithm, without having to rely on expensive laser scanners.

■ NAVER LABS’ diverse location & mobility intelligence technologies
NAVER’s booth is largely composed of an indoor section and an outdoor section. This concept mirrors the character of location and mobility intelligence technology, which functions seamlessly across indoor and outdoor environments. The exhibition features NAVER LABS’ key research outcomes, ranging from the on-the-road R1 to indoor autonomous robots. See details on exhibits
At CES 2019, NAVER LABS presents its latest location and autonomous mobility intelligence technologies, developed with the goal of achieving ambient intelligence. See booth location and overview

■ xDM Platform
eXtended Definition & Dimension Map
The xDM platform is an integrated location and mobility solution for smartphones and autonomous machines. It combines our portfolio of robotics, autonomous driving and AI-based technologies such as HD mapping, localization and navigation with high-precision spatial data. Key features are self-updating 3D/HD mapping, precise indoor and outdoor positioning and context-aware location information based on real-time spatial data. The platform solution supports the development of future location-based services such as augmented reality (AR) walking navigation and autonomous mobility services that include autonomous vehicles, service robots and advanced driver-assistance systems (ADAS).

■ Mapping Solutions
M1, Indoor Autonomous Mapping Robot
M1 is an indoor 3D/HD mapping robot that navigates autonomously in indoor spaces. M1 automatically collects high-resolution images and 3D spatial data via high-performance cameras and LiDAR, significantly improving the efficiency of what was previously a manual mapping process. The resulting HD maps provide spatial data that is essential to location-based services, such as AR walking navigation and indoor autonomous service robots.
Self-Updating Map
NAVER LABS uses cutting-edge AI technologies for advanced research on self-updating maps. The technology uses data collected by indoor autonomous robots and advanced AI solutions developed by experts in robotics, computer vision, deep learning and machine learning. Point of interest (POI) change detection technology detects and updates information on individual stores in large shopping malls. Further research advances on POI attribute recognition and semantic mapping technology will be phased in over the next few years.

■ Autonomous Robots
AROUND Platform, Autonomous Service Robot Platform
The ambition of the AROUND platform is to commercialize autonomous robot services. The key functions of the autonomous robots are distributed between mapping robots and the xDM cloud. This separation significantly lowers manufacturing costs. The mapping robot retrieves spatial data by navigating the indoor environment. The map data is then uploaded to the xDM cloud, from where autonomous services are delivered through cloud-based visual localization and path planning. The collision avoidance algorithm that runs on the edge ensures that the AROUND platform effectively responds to unexpected circumstances and avoids obstacles until the destination has been safely reached. Depending on spatial characteristics and user needs, it can be customized to serve different purposes, from delivering books in a library or store to giving directions in a shopping mall.
AROUND G, Autonomous Guide Robot
AROUND G is an autonomous guide robot built on the AROUND platform. It provides guidance in large indoor spaces such as shopping malls, airports and hotels, and provides intuitive information through AR navigation. High-precision indoor maps and visual and sensor localization are all serviced over the xDM platform to provide accurate location sensing and to guide users to their destination via the best route. The AR navigation installed in the main unit delivers information on the surrounding space while giving directions.
Immersed in its environment, AROUND G creates ambient intelligence whereby users are more engaged by the useful services the robot provides than by the robot itself.

■ Autonomous Driving
Hybrid HD Map & R1
Based on our autonomous driving and 3D/HD mapping technology, we’re developing mapping solutions using aerial images and mobile mapping data. 3D mapping technology combines the aerial images and extracts information from the road surfaces. The lightweight mobile mapping system R1 then generates HD maps from point clouds while autonomously on the move. Compared to HD maps obtained with expensive mobile mapping systems, this hybrid HD map solution maintains high accuracy at lower cost.
NAVER LABS ADAS CAM
The ADAS CAM offers a suite of ADAS functions based on deep learning algorithms. The system relies on only a single camera for forward-collision warning (FCW) and lane-departure warning (LDW). In addition, the integration of the hybrid HD map on the xDM platform enables functions of higher precision even in complex environments. ADAS camera modules, developed in-house, accurately gauge road conditions in a variety of circumstances with high dynamic range (HDR) and flicker-free functions.

■ NAVER Maps & Wayfinding
NAVER Maps & Wayfinding
NAVER Maps offers common, everyday services such as location search, public transit information and driving navigation. Users are seamlessly provided with up-to-date information on indoor and outdoor spaces and, over the xDM platform, other innovative services are being developed to meet future needs.
Indoor AR Navigation
NAVER LABS provides indoor AR navigational information, based on user location and positioning even where there’s no GPS coverage. It utilizes indoor maps created by the mapping robot M1 on the xDM platform, and visual and sensor localization technology. Turn-by-turn directions are given with reference to POIs within the user’s visual range instead of the remaining distance they need to cover.
AKI, Location & Geofencing Technology
AKI is a smart watch for young children that utilizes location detection, geofencing technology and personalized positioning over the xDM platform. Based on location pattern analysis, AKI provides timely notifications of a child’s location and movements to their parents and guardians.
AWAY, In-Vehicle Infotainment Platform
AWAY is an infotainment platform for vehicles with a user interface that enhances driver safety and which specifically optimizes music, news and other media services for the driving environment. The AWAY head unit gives drivers simultaneous access to various functions, from media content to navigation, on a wide 24:9 ratio screen that supports split view. The platform has been deployed for vehicles operated by the Korean car sharing company Green Car.
AHEAD, 3D AR HUD
AHEAD is a 3D AR head-up display (HUD) for vehicles. Most HUDs can be distracting for drivers due to the different focal distance between the virtual images and their actual view. Through 3D optical technology, the virtual images projected by AHEAD appear to exist on the road, allowing drivers to effortlessly perceive information. Download AHEAD brochure (PDF)

■ Robotics
AMBIDEX, Robot Arm with Innovative Cable-Driven Mechanisms
AMBIDEX is a robot arm resulting from collaborative R&D on human-robot coexistence. The arm uses innovative cable-driven mechanisms that make any interaction with humans safe. At just 2.6 kg (5.7 lbs), it weighs less than the average arm of a male adult.
AMBIDEX can be operated at a maximum speed of 5 m/s and is capable of carrying up to 3 kg (6.6 lbs). Because AMBIDEX can be controlled to the same extent as an industrial robot, it has a wide range of applications, from simple carrying to performing complex tasks that require precise manipulation and collaboration. AMBIDEX supports high-speed, wireless, real-time control from remote locations using the low latency and high throughput of 5G networks.
AIRCART, Human-Power Amplification Technology
The AIRCART trolley is built on robotics technology that augments human strength. The physical human-robot interaction (pHRI) makes it easy for anyone to shift heavy loads. How the user intends to move AIRCART is captured by a force sensor on the handle, so controlling it is intuitive and simple from the start. Its automatic braking system prevents accidents when going up or down a slope. AIRCART is available for use at bookstores and factories.
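As a rough illustration of how a handle force sensor can drive this kind of power assist and automatic braking, here is a minimal sketch. All names, gains, and thresholds are illustrative assumptions rather than AIRCART's actual control logic: the motor amplifies the force the user applies, speed is capped on a downhill slope, and the brake engages when the handle is released.

```python
from dataclasses import dataclass

ASSIST_GAIN = 2.0          # assumed ratio of motor assist to user force
MAX_DOWNHILL_SPEED = 0.8   # assumed speed cap on slopes, m/s
GRIP_THRESHOLD = 1.0       # assumed minimum handle force (N) indicating the user is holding on

@dataclass
class State:
    handle_force: float  # N, positive = pushing forward, from the handle sensor
    speed: float         # m/s, positive in the direction of travel
    slope: float         # rad, > 0 means going downhill

def control_step(s: State) -> dict:
    """One control cycle: decide motor assist and brake from sensor readings."""
    # Brake automatically if the handle is released (the user lost hold).
    if abs(s.handle_force) < GRIP_THRESHOLD:
        return {"motor_force": 0.0, "brake": True}
    # On a downhill slope, hold a constant speed so the user need not pull back.
    if s.slope > 0.0 and s.speed > MAX_DOWNHILL_SPEED:
        return {"motor_force": 0.0, "brake": True}
    # Otherwise amplify the user's push so heavy loads move with a light force.
    return {"motor_force": ASSIST_GAIN * s.handle_force, "brake": False}

print(control_step(State(handle_force=15.0, speed=0.3, slope=0.0)))  # assisted push
print(control_step(State(handle_force=0.2, speed=0.5, slope=0.1)))   # handle released -> brake
```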
NAVER is a company creating new ways for people to discover and connect. The information and services we offer are based on contextual understanding, personalization and natural interfaces. To seamlessly integrate these services into diverse life experiences, NAVER LABS is developing innovative technology in robotics, autonomous mobility and location intelligence. Learn more about us in the NAVER exhibition area at CES 2019.

■ About Company
NAVER
NAVER Co., Ltd. is South Korea’s largest web search engine, as well as a global ICT brand providing services that include the LINE messenger, currently with over 200 million users around the world, the SNOW video app, and the digital comics service NAVER WEBTOON. NAVER BAND, a group SNS service, has also achieved a million MAU. Sustained research and development in AI, robotics, mobility, and other future technologies is propelling NAVER toward the transformation and innovation of technology platforms, while the company also pursues shared growth with users from the global community and a vast number of partners. In 2018, NAVER was ranked the 9th most innovative company by Forbes and 6th on Fortune's Future 50 list.
NAVER LABS
Founded in 2013 as NAVER's research center, NAVER LABS spun off as a separate entity in 2017 to focus its research on ambient intelligence in areas such as autonomous driving, robotics, artificial intelligence and geospatial data. NAVER LABS' mission is to achieve ambient intelligence that enriches user environments with technology that proactively understands users and provides them with information and services. In line with this mission, distinguished researchers from Korea and Europe are committed to understanding the places where people carry out their lives, to connect these locations and shape the future of mobility.

■ About CES
CES® is the world's gathering place for all who thrive on the business of consumer technologies. It has served as the proving ground for innovators and breakthrough technologies for 50 years, the global stage where next-generation innovations are introduced to the marketplace. As the largest hands-on event of its kind, CES features all aspects of the industry. CES 2019 will run January 8-11, 2019 in Las Vegas, NV.

■ Booth Location
Tech East, LVCC, Central Plaza – CP 14

■ CES 2019 Innovation Awards Honorees
R1, Mobile Mapping System (Vehicle intelligence and self-driving technology)
AWAY, In-Vehicle Infotainment Platform (In-vehicle audio/video)
AHEAD, 3D AR HUD (In-vehicle audio/video)
AMBIDEX, Robot Arm with Innovative Cable-Driven Mechanisms (Robotics and drones)

■ Exhibitions
Learn more: Introduction of NAVER LABS’ CES 2019 exhibits
xDM platform, eXtended Definition & Dimension Map
The xDM platform is an integrated location and mobility solution for smartphones and autonomous machines. It combines the NAVER LABS portfolio of robot- and AI-based technologies such as high definition (HD) mapping, localization and navigation with high-precision spatial data. Key features are self-updating 3D/HD mapping, precise indoor and outdoor positioning and context-aware location information based on real-time spatial data. The platform solution supports the development of future location-based services such as augmented reality (AR) walking navigation and autonomous mobility services that include autonomous vehicles, service robots and advanced driver-assistance systems (ADAS).
Mapping Solutions
M1, Indoor Autonomous Mapping Robot
Self-Updating Map
Autonomous Robots
AROUND Platform, Autonomous Service Robot Platform
AROUND G, Autonomous Guide Robot
Autonomous Driving
Hybrid HD Map & R1
ADAS CAM
NAVER Maps & Wayfinding
Indoor AR Navigation
AWAY, In-Vehicle Infotainment Platform
AKI, Smart Watch for Kids
AHEAD, 3D AR HUD
Robotics
AMBIDEX, Robot Arm with Innovative Cable-Driven Mechanisms
AIRCART, Human-Power Amplification Technology

■ Demonstration Schedule (1/8-1/10)
AROUND G: 11:00 / 13:00 / 15:00 / 17:00
AMBIDEX: 11:30 / 13:30 / 15:30 / 17:30

■ Contact
Partnership Proposal: email@example.com
Media Contacts
Ryan Hyeonwoo Lee, firstname.lastname@example.org (LINE) hlee293
Dong-keun Han, email@example.com (LINE) drake3323
NAVER LABS is beginning a technological collaboration with Qualcomm, a global pioneer in advanced digital wireless communication technologies, products, and services. Starting with a memorandum of understanding with Qualcomm's subsidiary, Qualcomm Technologies, Inc., we are going to proactively integrate various technologies from each company in a range of fields including robotics, self-driving technology, and AR, among others. Through this technological cooperation, NAVER LABS will be able to take its technologies, including self-driving, IVI, robotics, precise location, and AR navigation, to the next level by utilizing the know-how and solutions that Qualcomm has accumulated as a leader in the global chip market. Not only that, we also expect our research on ambient intelligence to expand as a result of this partnership. Synergy may take place in the form of advancement, but it can also lead to new possibilities that did not exist before. Through the organic cooperation between the two companies, we will begin new stories of technological innovation in the places of our daily lives. We will continue to share the process and outcomes of this promising work.
NAVER LABS is starting a new collaboration with SOCAR. On the 14th, NAVER LABS is signing a partnership with SOCAR to work on advanced driver assistance systems (ADAS) and HD maps based on self-driving vehicle technology. We plan to apply the self-driving know-how we have accumulated so far in the form of ADAS, contributing to the safe operation of SOCAR vehicles. In addition, we intend to link the xDM platform, which we unveiled at DEVIEW 2018, with SOCAR's vehicles in order to render dynamic maps that show traffic conditions in real time. This will help SOCAR customers reach their destinations more safely and quickly. As is well known, SOCAR is the biggest car sharing company in Korea, directly operating around 11,000 vehicles. The large-scale data collected by the vehicles operated by SOCAR and the map information will be integrated with the technology owned by NAVER LABS, accelerating the formation of a digital twin ecosystem where real-time information on the road environment is uploaded directly to the xDM platform. A good collaboration always brings new possibilities. NAVER LABS will continue to build innovative partnerships to develop technologies that have real-life applications and that directly address problems experienced on a daily basis.
NAVER LABS’ research on ambient intelligence has led to four CES 2019 Innovation Awards. Every year, a judging committee composed of industry experts, including engineers and designers, selects products with outstanding technology and competitive designs to receive the CES Innovation Awards. This year, NAVER LABS participated in three product categories, and four of its products were honored with the prestigious award. AHEAD and AWAY received awards in the in-vehicle audio/video category, NAVER LABS R1 received an award in the vehicle intelligence and self-driving technology category, and AMBIDEX was recognized in the robotics and drones category.

AHEAD, 3D AR HUD
AHEAD is a three-dimensional augmented reality head-up display (3D AR HUD) unveiled for the first time at DEVIEW 2018. Unlike conventional HUD technology, which creates an image at a single focal length, AHEAD provides driving information in a way that is more naturally integrated with the real road environment. It allows drivers to feel as though the visual information really exists on the road, and to more easily take in the various driving information provided, such as navigation instructions, forward collision warnings, lane departure warnings, safe distance warnings, and so on.

AWAY, in-vehicle infotainment platform
AWAY is an infotainment platform for vehicles invented by NAVER LABS. It offers a range of media services optimized for the driving environment, including a UI designed for driver safety, various location-based information systems, an exclusive navigation program with a voice agent that can search destinations, NAVER Music and Audio Clip, and so on. One of the defining features of the AWAY head-unit display showcased at CES this year is the 24:9 split-view system, which allows the user to simultaneously enjoy multiple functions, such as media content and navigation, without visual interference.

NAVER LABS R1, mobile mapping system
NAVER LABS R1 is a mobile mapping system designed to create hybrid high definition (HD) maps for self-driving vehicles. Hybrid HD maps, based on NAVER's proprietary mapping solution, are created by organically integrating information retrieved from existing precision aerial photographs with the point cloud data collected by an R1 vehicle. Both the 2D and 3D data are processed with a unique algorithm that automatically extracts the features required to draw the HD maps. This reduces production costs compared to conventional MMS devices while ensuring the same level of accuracy and recency.

AMBIDEX, robot arm with innovative cable-driven mechanisms
AMBIDEX is a robot arm that can safely interact with people through an innovative cable-driven power transfer mechanism. A single AMBIDEX arm weighs just 2.6 kg, lighter than the arm of a fully grown man. Despite its light weight, it can carry a payload of 3 kg and operate at a maximum speed of 5 m/s. The strength of its seven joints can be amplified simultaneously, and it can operate with precise control. Able to develop its operating skills through deep learning, it can provide people with a range of services that directly help them.
Starting on 8 January next year, NAVER LABS will be participating in CES 2019 in Las Vegas, USA, where the products that won CES 2019 Innovation Awards will be introduced along with various other achievements in the field of ambient intelligence, including artificial intelligence (AI), self-driving vehicles, robotics, and so on. NAVER LABS hopes to take this opportunity to create new possibilities in the location and mobility sector with partners on the global stage.
AROUND G is an indoor self-driving guide robot. It drives autonomously in large-scale indoor spaces such as shopping malls, airports, and hotels. When giving directions, it uses AR navigation technology on its main display to deliver location and route information in a vivid and immersive way. AROUND G can self-drive smoothly without an expensive laser scanner. The key to this is the xDM Cloud of the AROUND Platform, together with the deep reinforcement learning algorithm running on the robot itself. The AROUND Platform is a solution that divides the fundamental functions required for a self-driving robot into two parts: a mapping robot and the xDM Cloud. First, the mapping robot, M1, drives autonomously around indoor spaces to collect spatial data and then uploads the resulting map data to the xDM Cloud. The service robot then relies on the data and services processed in the cloud, such as map data, visual localization, and path planning, to drive autonomously. An obstacle avoidance algorithm based on deep reinforcement learning runs on the robot's main body, so it responds smoothly to unexpected events while giving directions. In other words, this robot can move smoothly to a destination while naturally avoiding pedestrians and other obstacles that do not exist in the map. Our goal is to bring self-driving service robots into the mainstream. If we can keep reducing the production cost of self-driving technology by eliminating expensive laser scanners, we will be able to bring a range of useful self-driving service robots into our daily lives that much sooner.
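The split described above, with mapping and heavy computation in the cloud and lightweight driving on the robot, can be pictured with a short sketch. The class and method names below (CloudServices, GuideRobot, localize, plan_path) are invented for illustration and are not NAVER LABS APIs; this is only a rough picture of the division of labor, not an implementation.

```python
# Illustrative sketch only: all names are invented, not NAVER LABS APIs.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class CloudServices:
    """Stands in for the xDM Cloud holding the map built by the mapping robot."""
    waypoint_graph: List[Point]

    def localize(self, camera_image) -> Point:
        # Visual localization against the prebuilt map would run here.
        return self.waypoint_graph[0]

    def plan_path(self, start: Point, goal: Point) -> List[Point]:
        # Global path planning on the cloud-side map (e.g. graph search).
        return [start, goal]

class GuideRobot:
    """Service robot: low-cost sensors and local obstacle avoidance only."""
    def __init__(self, cloud: CloudServices):
        self.cloud = cloud

    def go_to(self, camera_image, goal: Point) -> None:
        start = self.cloud.localize(camera_image)       # ask the cloud where we are
        for waypoint in self.cloud.plan_path(start, goal):
            self.follow_with_local_avoidance(waypoint)  # learned policy runs on-board

    def follow_with_local_avoidance(self, waypoint: Point) -> None:
        # A deep-RL avoidance policy would adjust velocity commands here,
        # using only on-board observations, to reach the waypoint safely.
        print(f"heading to {waypoint} while avoiding obstacles")

robot = GuideRobot(CloudServices(waypoint_graph=[(0.0, 0.0), (5.0, 3.0)]))
robot.go_to(camera_image=None, goal=(5.0, 3.0))
```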
Self-driving vehicles have many sensors. They drive autonomously by processing a vast amount of data collected through those sensors. There is, however, a part of a self-driving vehicle that acts as both data and a sensor at the same time: the HD map. There is a reason we can describe an HD map as another sensor on a self-driving vehicle. Self-driving vehicles use an HD map, along with other sensor data, to improve the accuracy of their localization and to plan routes more effectively and safely. In this sense, an HD map is an essential element for the performance and safety of self-driving vehicles. This is why we are focusing on developing a new solution for precise, machine-readable HD maps that can be used in self-driving vehicles. The Hybrid HD Mapping technology we have unveiled is a truly unique solution. It is based on the organic integration of large-scale aerial photographs of each city with data from a mobile mapping system. First, we extract information about the layout of the road surface from aerial images. Then, we organically integrate a point cloud collected by R1, our proprietary lightweight mobile mapping system (MMS), as it moves through that space. Compared to conventional HD maps constructed solely by MMS vehicles, our mapping process significantly reduces production costs and lead time, all while maintaining the same degree of accuracy. NAVER LABS is independently researching and developing self-driving vehicles and has obtained a temporary operating permit from the Ministry of Land, Infrastructure, and Transport. This allows us to develop Hybrid HD Mapping directly by testing and comparing our research results on the road. We are also actively conducting research on localization technology that utilizes HD maps. This technology allows self-driving vehicles to identify their current location accurately and safely, even in the densest parts of cities where GPS signals are easily lost. As more diverse self-driving machines and services are introduced, the importance of HD maps will only increase, and more advanced and diversified HD-map-based algorithms can be expected to appear. Through Hybrid HD Mapping technology, we hope to introduce a new HD map solution that satisfies the need to maintain data accuracy while keeping production costs reasonable.
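As a rough illustration of the idea, the sketch below (using NumPy, with invented data formats and a simplified planar pose) registers an MMS point cloud into the same global frame in which the aerial-derived road layout lives, which is the essence of keeping the 2D and 3D layers of a hybrid HD map aligned. It is a minimal sketch under those assumptions, not the actual mapping pipeline.

```python
# Minimal sketch: keep the 2D layout (from aerial images) and the 3D point
# cloud (from the MMS) registered in one global coordinate system.
import numpy as np

def pose_to_matrix(x, y, z, yaw):
    """4x4 transform from a simplified vehicle pose (position + heading only)."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    T[:3, 3] = [x, y, z]
    return T

def mms_cloud_to_global(points_vehicle, vehicle_pose):
    """Transform an MMS point cloud (N, 3) from the vehicle frame into the
    global map frame where the aerial-derived road layout already lives."""
    T = pose_to_matrix(*vehicle_pose)
    homog = np.hstack([points_vehicle, np.ones((len(points_vehicle), 1))])
    return (homog @ T.T)[:, :3]

# Road layout extracted from aerial images: 2D polylines in the global frame.
lane_layout = {"lane_boundary_1": np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.5]])}

# Point cloud captured by the MMS at a known pose, expressed in its own frame.
cloud_vehicle = np.random.rand(1000, 3) * 5.0
cloud_global = mms_cloud_to_global(cloud_vehicle, (12.0, 1.0, 0.0, 0.05))

# The hybrid HD map keeps both layers in one coordinate system.
hybrid_map = {"layout_2d": lane_layout, "structure_3d": cloud_global}
print(hybrid_map["structure_3d"].shape)
```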
AHEAD is a three-dimensional augmented reality head-up display (3D AR HUD). That is to say, it is a 3D display technology that provides information directly in a driver's natural line of sight. With conventional HUD technology, the focal point of the image created by the display is not synchronized with the actual road environment, which can negatively affect the driver's focus: when the driver focuses on the information displayed on a conventional HUD, their view of the road is obscured, and vice versa. To address this issue, AHEAD utilizes 3D optical technology that presents information so that it appears integrated into the actual road environment, and it covers both short-distance and long-distance information. Many benefits follow when the view of the actual road appears to the driver to be synchronized with the display's information. An image displayed on AHEAD looks like it actually exists on the road, which allows it to deliver information in the most natural manner. Because drivers do not have to adjust their focal point, they can maintain their attention on the road, which effectively improves safety and causes less eye fatigue. Furthermore, once it is integrated with precise road and map data, even more accurate information will be able to be displayed. The space inside vehicles and the driving environment are very distinctive. In the future, more and more information and services will be integrated to assist with driving and improve safety. Within that trend, AHEAD, which delivers information precisely and safely without obscuring the view of the road, will be a new display solution that connects vehicles and information in the most useful and natural way possible. Download the leaflet
It is easy to get lost inside large-scale indoor spaces like shopping malls. However, GPS does not work inside buildings, so smartphone navigation is of little help there. Even with a map in hand, there is still the problem of knowing your current location. For indoor navigation, we need to construct a precise map of the indoor space and also develop technology that accurately shows the current location without using GPS. In the field demonstration of indoor AR navigation conducted at the COEX Mall in Seoul, NAVER LABS utilized visual localization technology along with data from various sensors to solve the problem of finding the current location. It is a technology that analyzes images from the smartphone camera to identify where the user is. The precise indoor map and location data constructed by the mapping robot M1 were used as the key reference data for localization and navigation. In addition, for an even more intuitive user experience, we applied a technology that delivers turn-by-turn (TBT) directions through AR. Our robot-based precision mapping technology and our visual and sensor-fusion localization technology have been developed to provide directions and information services in indoor spaces while accurately identifying the current location, without having to build separate hardware infrastructure.
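As a rough sketch of how image-based localization of this kind can work, the snippet below performs the simplest form of it: it compares a query image descriptor against a database of descriptors tagged with the poses at which the reference images were captured and returns the pose of the most similar one. The descriptor extractor is left abstract and all data here is synthetic; a real system would refine the retrieved pose further, for example with local feature matching.

```python
# Minimal retrieval-based visual localization sketch (synthetic data only).
import numpy as np

class VisualLocalizer:
    def __init__(self, db_descriptors, db_poses):
        # db_descriptors: (N, D) descriptors of mapped images
        # db_poses: list of N poses (x, y, z, qw, qx, qy, qz)
        self.db = db_descriptors / np.linalg.norm(db_descriptors, axis=1, keepdims=True)
        self.poses = db_poses

    def localize(self, query_descriptor):
        q = query_descriptor / np.linalg.norm(query_descriptor)
        scores = self.db @ q           # cosine similarity against every mapped image
        best = int(np.argmax(scores))  # most similar reference view
        return self.poses[best], float(scores[best])

# Toy usage with random vectors standing in for real CNN image descriptors.
rng = np.random.default_rng(0)
db = rng.normal(size=(500, 256))
poses = [(float(i), 0.0, 0.0, 1.0, 0.0, 0.0, 0.0) for i in range(500)]
localizer = VisualLocalizer(db, poses)
pose, score = localizer.localize(rng.normal(size=256))
print(pose, score)
```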
In our daily lives, there are still many unsolved problems related to space and movement between spaces. These are the problems that NAVER LABS is concerned with. In the keynote speech given at DEVIEW 2018, we shared our deliberations on these issues and the results of our research.

"AI: Not Artificial Intelligence, Ambient Intelligence"

This was the central message of the keynote speech. Ambient intelligence refers to "a technology that provides relevant information or actions in a timely and natural manner by recognizing and understanding the environment and its context," and this is our technological vision. With this in mind, we unveiled the xDM Platform, an integrated location and mobility solution for people and self-driving machines. "xDM" stands for "extended definition and dimension map." It combines the mapping, localization, and navigation technologies and all the precision data we have gathered so far. It constructs precise 3D maps of indoor and outdoor environments for use on smartphones and in self-driving machines, and it has technology to automatically update those maps. It offers precise positioning that covers indoor, outdoor, and road environments without leaving any blind spots. It also stores real-time, real-space data, generating movement information and understanding context. The xDM Platform, which combines the aforementioned technologies, is comprised of two packages. One is the Wayfinding Platform, designed to help people find their current location and get directions through indoor and outdoor environments. The other is the Autonomous Mobility Platform, designed for vehicles and self-driving machines.

The Wayfinding Platform for People

The Wayfinding Platform is a solution that allows people to move along faster and more convenient paths. Through a location API, this platform provides detailed location and movement information to the user, such as smart geo-fencing, mobility pattern analysis, and personalized localization. In addition, POI information is continuously updated through the road/AR navigation API, which guides users along the quickest routes quickly and easily, even inside large-scale indoor spaces where GPS does not work. On the 3D indoor map created by the mapping robot M1, the user's current location can be recognized accurately using visual and sensor-fusion localization technology, without the need for separate geolocation infrastructure. The platform also provides turn-by-turn (TBT) information based on geographic features and delivers navigation information more intuitively through the AR navigation API. In the keynote address, a demonstration of the AR navigation technology was performed at COEX, Seoul, and our plan to collaborate with premier partners HERE and Incheon Airport Corp. was disclosed. We are waiting for more partners to collaborate with us. We also introduced scalable and semantic indoor mapping (SSIM), which automatically keeps indoor maps up to date. It is a technology that automates the indoor map creation, data collection, and maintenance processes by utilizing NAVER LABS' technologies in robotics, computer vision, visual localization, machine learning, and so on. Currently, we are focusing on the POI change detection stage, in which a self-driving service robot operating in indoor spaces automatically detects changes in POIs, and these changes are updated on the map.
In the future, this will be extended to POI recognition and semantic mapping, and the same technology will be applied to self-driving in outdoor and road environments.

An Autonomous Mobility Platform for Self-Driving Vehicles and Robots

These days, mobility solutions do not apply only to people. Soon, self-driving technology for self-driving robots, not to mention self-driving vehicles, will penetrate deeply into our daily lives. The Autonomous Mobility Platform is a solution for self-driving machines. In the keynote address, we unveiled new HD mapping technology for self-driving vehicles. An HD map is essential data that self-driving vehicles require to identify their exact location and to search for the optimal route to a destination. NAVER LABS uses its Hybrid HD Map solution to create HD maps for each city by organically integrating route networks extracted from precision aerial photographs with data collected by R1, NAVER LABS' mobile mapping system. We are implementing algorithms for both 2D and 3D data that automatically extract the features required for mapping. In addition, based on this HD map, we are developing a solution that can measure location accurately, even in shadowed areas like city centers where GPS signals cannot reach because of high-rise buildings, by combining the map with information collected through a self-driving vehicle's GPS sensor, IMU sensor, CAN data, LiDAR signals, and camera images. Furthermore, we are collaborating with Qualcomm and Mando on research into ADAS technologies connected with Hybrid HD Maps, as well as various other self-driving technologies. The AROUND platform is a solution for bringing self-driving service robots to the mainstream. It utilizes precision 3D maps created with M1 and cloud-based route search algorithms to reduce the cost of robot production while maintaining high-quality self-driving performance. Unlike conventional self-driving robots, which have to perform core functions such as map creation, location identification, route creation, and obstacle avoidance by themselves, this platform achieves highly precise indoor self-driving with only low-cost sensors and a small amount of processing power. Continuing from AROUND, which was used in YES24 bookstores last year, we are now developing AROUND G, a self-driving guide robot that provides direction services in large-scale indoor spaces such as shopping malls and airports. AROUND G will be outfitted with the AR navigation API to offer directions and guidance with an even more intuitive UX.

Ambient Intelligence Technologies for the Present, Not the Future

In this keynote, we also presented NAVER LABS' research outcomes on optical technologies. AHEAD is a 3D AR HUD (head-up display). It uses 3D display technology to deliver information to drivers in a way that does not make them shift their focal point. Since the actual view of the road the driver is watching has the same focal point as the display, the driver can take in location and mobility information more easily and naturally. In the future, various information and services provided by the xDM Platform may be delivered naturally to drivers through AHEAD. We are also working on refining AMBIDEX, the robot arm we unveiled last year, to make it safer for interaction in everyday environments. Unlike conventional robots, which primarily focus on position control, controlling force is more important for AMBIDEX.
For this reason, we have developed a simulator for kinematic and dynamic modeling. By running simulator tests before powering up the robot, we have been able to improve safety and quickly collect a vast range of data for different conditions. NAVER LABS envisions a world where tools and technologies naturally coexist with our everyday lives. Our presentation of these results and of the xDM Platform in the DEVIEW 2018 keynote address was part of our effort to realize that vision. We wish to understand the contexts of life in every space in which people reside, and to develop new services and tools based on that understanding. We believe technology should understand people; people should not have to understand technology. NAVER LABS will not stop working towards the realization of this vision, and will continue to grow together with our partners, sharing our technology and constantly introducing new platforms.
NAVER LABS is developing a search engine based on Foursquare's point-of-interest (POI) data to provide a global localization service. The strategic partnership draws on our natural language processing (NLP) and map service technologies. Foursquare has an enormous amount of global POI data: people from around the world use Foursquare's service to visit places for different reasons and in different contexts. By adding our know-how and technology, we want to create an advanced POI search engine adapted to each individual's needs. We also expect to develop new business models combining the data and technology of both companies. NAVER LABS conducts research in ambient intelligence: it supports users by providing information through an understanding of their environment and lifestyle, centered on location and mobility. We see no frontier concerning a user or lifestyle; each is unique. As announced in the partnership with HERE, our collaboration with Foursquare extends our ambient intelligence vision to a global scale, opening the door to new services and technologies.
NAVER LABS has signed a Memorandum of Understanding with HERE to develop autonomous 3D indoor maps. Key to the creation of these maps is NAVER LABS' Scalable & Semantic Indoor Mapping (SSIM) technology. The development of indoor maps relies heavily on manual human work, making them not only slow and expensive to produce but also difficult to keep up to date. Our advanced SSIM technology is going to provide an efficient solution for automatically updating points of interest (POI) in indoor environments where the information changes all the time. The blueprint for autonomous indoor mapping with HERE and SSIM is as follows:
1. A 3D high-resolution map is created with the laser scanner and high-performance camera of the mapping robot M1, which moves across the indoor area.
2. Data on the indoor space is continuously collected by the AROUND service robot.
3. The data AROUND collects is then analyzed by AI technology, which detects any changes in the environment and updates the service in real time.
We expect this automated solution to revolutionize how indoor maps are created and maintained. Together with HERE, we are moving ahead with the proof of concept of advanced SSIM. Through this project we will mature the SSIM technology and expect to build a cornerstone for indoor map construction and a foundation for future innovations.
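A minimal sketch of the change-detection step in this blueprint might look like the following, assuming the service robot reports (location, observed POI) pairs and the indoor map stores the last known POI per location; the data structures and names are invented for illustration only.

```python
# Minimal sketch of POI change detection between the stored map and fresh
# observations from the service robot (invented data structures).

def detect_poi_changes(indoor_map: dict, robot_observations: list):
    """Compare observations collected by the service robot against the map
    built by the mapping robot and return the updates to apply."""
    updates = []
    for location_id, observed_poi in robot_observations:
        known_poi = indoor_map.get(location_id)
        if known_poi != observed_poi:              # store changed, opened or closed
            updates.append((location_id, known_poi, observed_poi))
    return updates

indoor_map = {"unit_101": "Coffee A", "unit_102": "Bookstore B"}
observations = [("unit_101", "Coffee A"), ("unit_102", "Cosmetics C")]

for loc, old, new in detect_poi_changes(indoor_map, observations):
    indoor_map[loc] = new                          # apply the update to the map
    print(f"{loc}: '{old}' -> '{new}'")
```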
An image-based safe lane change (SLC) algorithm is proposed to aid lane-change maneuvers for both autonomous driving agents and human drivers. A binary classification (free or blocked) is performed to secure the safety of the ego-vehicle's surroundings before moving to a target lane. For precise classification, the SLC uses a convolutional neural network (ConvNet) that learns image features from a large-scale dataset. The ConvNet is powerful in that it can extract subtle image features that we could not previously obtain with hand-crafted functions; however, we also come to doubt the ConvNet when its outcomes are not aligned with our intuition. In fact, we cannot handle anomalous events if we do not understand how the ConvNet works. We know the road environment changes every moment; we therefore test autonomous driving functions cautiously before deploying them on the road. In other words, understanding the internal mechanisms of the ConvNet is essential before adopting it in autonomous driving systems. Recent research on weakly-supervised object localization gave us a clue about how the ConvNet makes decisions. In this article, we would like to introduce Class Activation Mapping (CAM) and analyze where the SLC algorithm looks in images.

So, what is the weakly-supervised object localization task?

To solve well-defined machine learning problems, supervised learning algorithms require plenty of data points and the corresponding ground truth labels. For image classification, a dataset consists of images and the keywords that describe them. On the other hand, to learn a model for the object detection task, we need not only the object names but also the image coordinates of the objects (see Fig. 1). As a task becomes more difficult, building a new dataset for a supervised learning setup consumes more time and cost. Thus, researchers look for new methods to apply existing large-scale datasets to different domains. For example, weakly-supervised object localization attacks the object detection task using image classification datasets, where the object localization labels are missing.

Fig. 1: For an image, the ground truth label varies depending on the task: examples of ground truth labels for image classification (left) and for object detection (right).

How do we learn a model for image classification?

For image classification, the architecture of most ConvNets can be divided into two parts: convolutional layers to compute image features and fully-connected layers for classification (see Fig. 2).

Fig. 2: Image features are computed with convolutional layers, and go through the fully-connected layers for a prediction.

Supervised learning algorithms attempt to reduce the difference between the prediction and the ground truth during the training phase. We lose spatial information when an image feature is flattened to feed the subsequent fully-connected layers. In the weakly-supervised object localization task, we instead exploit the intermediate image features computed by the convolutions and obtain the salient regions behind a prediction. The CAM algorithm assumes that the salient regions containing many parts of a certain object will be activated during classification. More precisely, we explain the CAM algorithm with the VGG16 network architecture. VGG16 generates a (512, 7, 7) image feature at its last convolutional layer when it takes a (3, 224, 224) input image.
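The feature shape quoted above is easy to verify with a stock VGG16; the snippet below uses the torchvision model purely as a stand-in for the network in the article.

```python
# Quick check that VGG16's convolutional stack maps a (3, 224, 224) image
# to a (512, 7, 7) feature, which is the feature CAM operates on.
import torch
from torchvision.models import vgg16

model = vgg16(weights=None)     # untrained stand-in; older torchvision uses pretrained=False
conv_features = model.features  # convolutional part only, no fully-connected layers

x = torch.randn(1, 3, 224, 224)  # one RGB image of size 224 x 224
with torch.no_grad():
    feature = conv_features(x)
print(feature.shape)             # torch.Size([1, 512, 7, 7])
```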
The image feature can be viewed as 512 different channels, each a (7, 7) map, and each channel contributes differently to the classification of the given object classes. The CAM algorithm therefore takes the relative importance of the channels from the fully-connected layer that follows. Using those weights, we aggregate the feature maps over the channels and finally obtain a saliency map that shows where the ConvNet looks in the image to make a prediction (see Fig. 3).

Fig. 3: Since, in the weakly-supervised object localization task, we have no information about the objects' locations in the image, we cannot apply the supervised learning regime to learn a localization model. Instead, the CAM algorithm adaptively sums the image features, where the weights are identical to the parameters of the fully-connected layer that follows the convolutions. We now see the activated areas where the ConvNet focuses to predict a class.

Back to the autonomous driving research

To learn an SLC model, we annotated rear-side view images, captured in various road environments, according to the following criteria: Blocked if the ego-vehicle cannot physically move to the target lane; Free if the ego-vehicle can move to the target lane; and Undefined for ambiguous situations such as crosswalks and other unusual scenes. The annotation rules are akin to a human driver's decision-making process for lane changes: we instantly decide whether to move to a target lane by checking the rear-side view mirrors. To tolerate various driving behaviors when building the dataset, we only accept a ground truth label when multiple annotators agree on the status of the scene.

Can the SLC model make correct predictions on roads it has never visited? Yes, it can. To examine the generalization performance of the SLC model, we tested it on images that were not used during the training phase and achieved 96.98% classification accuracy. Using CAM, we also verified that the SLC model behaves as we intended. We replaced the fully-connected layers of the SLC model with a single fully-connected layer of size 512. With the convolutional parameters fixed, we fine-tuned the SLC model on the same dataset to obtain saliency maps. As shown in Fig. 4, similar to human drivers, the SLC model looks at the space in the adjacent lane to judge the probability of a successful lane change.

Fig. 4: The classification result of the SLC model (left), and the visualization result using CAM to highlight the areas behind a prediction (right).

The following video was recorded inside the autonomous driving car running in a complex urban road environment; the results of the perception algorithms are also displayed on the right. The SLC algorithm deployed in the NAVER LABS autonomous driving car secures the safety of lane-change operations.

References
1) S.-G. Jeong, J. Kim, S. Kim, and J. Min, End-to-end Learning of Image based Lane-Change Decision, in Proc. IEEE IV'17
2) B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, Learning Deep Features for Discriminative Localization, in Proc. IEEE CVPR'16
3) MatCaffe implementation of class activation mapping: https://github.com/metalbubble/CAM
4) Keras implementation of class activation mapping: https://github.com/jacobgil/keras-cam
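To make the CAM mechanism described above concrete, here is a minimal sketch in the spirit of references 2) to 4): global-average-pool the (512, 7, 7) feature, classify with a single fully-connected layer (two classes, mirroring the free/blocked setup), and reuse that layer's weights to combine the feature maps into a (7, 7) saliency map. The network here is untrained and the code is illustrative only, not the SLC implementation.

```python
# Minimal CAM sketch: GAP over conv features, linear classifier, and a
# class activation map built from the classifier weights.
import torch
import torch.nn as nn
from torchvision.models import vgg16

backbone = vgg16(weights=None).features          # conv layers -> (512, 7, 7)
classifier = nn.Linear(512, 2)                   # two classes: "free" vs "blocked"

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    features = backbone(x)                       # (1, 512, 7, 7)
    pooled = features.mean(dim=(2, 3))           # global average pooling -> (1, 512)
    logits = classifier(pooled)                  # class scores
    pred = int(logits.argmax(dim=1))

    # CAM: weight each of the 512 feature maps by the FC weight for the
    # predicted class and sum over channels -> a (7, 7) saliency map.
    w = classifier.weight[pred]                  # (512,)
    cam = (w[:, None, None] * features[0]).sum(dim=0)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

print(pred, cam.shape)                           # e.g. 0 torch.Size([7, 7])
```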
At last year's DEVIEW, the NAVER LABS robotics team announced the 3D indoor mapping robot M1. Since then, M1 has evolved into AROUND, which was unveiled at this year's conference. AROUND has been designed to increase the adoption of indoor autonomous robots, whose high price tag has so far prevented their penetration into the consumer market. By making them more accessible, people will be able to experience a variety of indoor autonomous driving robot services in different spaces and environments. The LABS solution distributes the core functions of autonomous driving that account for a high proportion of the manufacturing costs. Up to now, robots had to produce maps, identify their location, create routes, and avoid obstacles all by themselves. NAVER LABS has allocated these requirements to different devices that work in tandem: AROUND, M1 and the map cloud. M1 produces the map, the map cloud creates the routes, and AROUND focuses on accurate autonomous driving and avoiding obstacles using only low-cost sensors and little processing power. The reduction in manufacturing costs will make it possible to mass produce customised indoor service robots that can assist people in many different places and in many different ways. AROUND is scheduled to operate for the first time at the YES24 bookstore in the F1963 complex in Busan. AROUND will collect, in its storage unit, books that customers have finished browsing and, once they exceed a certain weight, move them to a designated place. From there, employees can collect the books to put them back. This solves one of the most tedious chores bookstore employees have to deal with on a daily basis: as books are computerized in the store, if even a single book is in the wrong place, employees need to check all the surrounding books. AROUND is expected to significantly relieve staff of such painstaking work. AROUND will change the reading experience in bookstores because it connects the spaces where books are displayed with the places where people read them. AROUND will make it possible for people to choose their books and take them to a comfortable spot for browsing instead of having to look at them standing up. When they're done, they simply place them in AROUND, which takes them away. The ambient intelligence of AROUND lies in how it integrates user context with the cultural characteristics of a space to create a better experience.
NAVER LABS, an ambient intelligence company specialized in location & mobility, announced AKI at DEVIEW 2017. AKI, a location and mobility watch for elementary school children and their parents, provides safety solutions that treat relationships as an important factor. Parents are naturally worried or concerned about their young children when they're not with them. They'll often want to know whether they've arrived safely at school or who they're with at different times throughout the day. Children may also need to be reassured that someone will be there to pick them up after school, and when. Answering these questions requires gathering several pieces of information, including accurate locations and the places where people are. AKI is designed to provide parents with information on where their children are at any time and can alert them when the children are in an unfamiliar place or performing unusual activities and movements. AKI utilizes NAVER LABS' own WPS (Wi-Fi positioning system), which provides an exact position even indoors, and its automatically controlled, low-power location detection recognizes behaviour. It is equipped with personalized Wi-Fi fingerprinting technology. AKI detects the exact location of the child and how the child is moving with an activity detector and a movement classifier. It learns the pattern of the child's daily routine by analyzing places, times and situations, so that it can alert parents when there is an 'abnormality', i.e. a place that is not part of the child's daily routine. When the location of a child has been accurately identified, the information can be communicated in a natural, contextualised way. NAVER LABS strives to apply ambient intelligence to mobile user environments. AKI addresses one of the important parts of our lives that location-based information can serve. The location of a child is precious information that parents of young children naturally want to have. AKI embodies the ambient intelligence philosophy and technology of NAVER LABS and will be available this year.
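The routine-learning idea behind these alerts can be sketched roughly as follows; the data structures and thresholds are invented for illustration and do not reflect AKI's actual models.

```python
# Rough sketch: learn which places are usual for each hour of the day from
# position logs, then flag observations that fall outside that routine.
from collections import defaultdict

class RoutineModel:
    def __init__(self, min_visits=3):
        self.visits = defaultdict(lambda: defaultdict(int))  # hour -> place -> count
        self.min_visits = min_visits

    def observe(self, hour: int, place: str) -> None:
        """Accumulate history, e.g. from daily Wi-Fi-based position logs."""
        self.visits[hour][place] += 1

    def is_abnormal(self, hour: int, place: str) -> bool:
        """A place is 'abnormal' if it has rarely been seen at this hour."""
        return self.visits[hour][place] < self.min_visits

routine = RoutineModel()
for _ in range(10):
    routine.observe(15, "school")                 # normally at school at 3 pm

print(routine.is_abnormal(15, "school"))          # False: part of the routine
print(routine.is_abnormal(15, "shopping mall"))   # True: would trigger an alert
```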
NAVER LABS has introduced AIRCART at the YES24 bookstore. The electric cart delivers books from the warehouse to the store. It was named 'AIRCART' because the motor automatically boosts its power, giving the impression that the cart is gliding, even when carrying heavy objects. Equipped with an automatic braking system, it is safe to take uphill and downhill. As bookstores can be busy places, AIRCART has been designed so that cart users can easily see whether there is sufficient space in front of the cart, to prevent collisions and for the safety of small children. The shelves of the cart are tilted inwards so that more books can be loaded and so that they don't fall out. AIRCART is equipped with physical human-robot interaction (pHRI) technology, a technology also used in wearable human power amplifiers. The movement of the cart (momentum and direction) is controlled in real time by identifying the user's intentions through the force sensor on the cart handle. This makes it easy for anyone to use AIRCART with no prior experience. NAVER LABS' research in location and mobility is driven by the desire to provide natural, useful everyday services that impact people's lives, and its research in robotics is no exception. AROUND and AIRCART are two examples of technologies that add value to people's lives. The NAVER LABS robotics team will continue collaborating with partners and entrepreneurs so that people can benefit from new ambient intelligence services and products.
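The handle-force-to-motion idea can be sketched as a simple one-dimensional admittance loop: the force the user applies to the handle drives a light virtual mass-damper whose velocity becomes the motor command, so the cart feels light regardless of the real payload. The parameters below are invented for illustration and are not AIRCART's actual control law.

```python
# One-dimensional admittance-control sketch for a force-assisted cart.

def admittance_step(handle_force: float, velocity: float,
                    dt: float = 0.01, virtual_mass: float = 5.0,
                    virtual_damping: float = 8.0) -> float:
    """One control step: the cart behaves like a light virtual mass-damper
    driven by the user's handle force, regardless of the real payload."""
    accel = (handle_force - virtual_damping * velocity) / virtual_mass
    return velocity + accel * dt

v = 0.0
for _ in range(200):            # user pushes with a constant 10 N for 2 seconds
    v = admittance_step(10.0, v)
print(round(v, 3))              # approaches the steady-state 10 / 8 = 1.25 m/s
```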
AMBIDEX is a robot arm that interacts very naturally with humans. It is the fruit of a long-term research project with Korea Tech, and in particular with professor Yong-Jae Kim, a world leader in the field, at a facility equipped with world-class robotic arm mechanism design capabilities. Robot arms have a long history in robotics research, where they have mainly been developed for manufacturing purposes focused on precision, repetition and heavy-load work. This kind of heavy, bulky robot arm is not well suited to a home setting and could even be considered dangerous. NAVER LABS' work in the areas of hardware, control, recognition and intelligence aims at making the robot arm in the home a reality. AMBIDEX, one of the fruits of this research, was unveiled on stage at DEVIEW. AMBIDEX is safe for people to interact with and even lighter than a human arm. AMBIDEX uses cable-driven mechanisms that place all the heavy actuators in the shoulder and body. This lightens the arms and means they can be driven with wires. Using innovative mechanisms that enhance the force and strength at each joint, AMBIDEX has achieved the same level of control, performance, and precision as industrial robots. AMBIDEX aims to be a breakthrough robotic hardware solution that can work safely, flexibly and precisely with humans.
At this year's DEVIEW, a whole range of new ambient intelligence products and technologies were revealed in the NAVER LABS keynote. Ambient intelligence technology detects and understands humans and their contexts to naturally provide information or perform actions at the moment of need. During his keynote, Changhyun Song, CEO of NAVER LABS and CTO of NAVER, emphasized the motivation behind the ambient intelligence research he leads: "In this world where tools and information are overflowing, technology needs to understand humans and environments even better. The real value of technology will only be realized when it has become part of the fabric of everyday life." All of the research results shared during the keynote contribute to NAVER LABS' vision of ambient intelligence, and we will continue to focus on technology, products and services that directly impact people. NAVER LABS envisions a future where people and society are not restricted by tools and technology; a world where people can focus on the things they value most in life, with ambient intelligence helping them do so.
AWAY is an infotainment platform for vehicles with a user interface that enhances driver safety and which specifically optimizes music, news and other media services for the driving environment. The AWAY head unit gives drivers simultaneous access to various functions, from media content to navigation, on a wide 24:9 ratio screen that supports split view. AWAY has been deployed for vehicles operated by the Korean car sharing company Green Car. Green Car plans to install AWAY in 3,000 vehicles within the year.
NAVER Corporation and Xerox Corporation today announced an agreement for NAVER to acquire the Xerox Research Centre Europe in Grenoble, France. The French Works Council's consultation on this project has now been completed, and the agreement is expected to close in the third quarter, subject to fulfillment of certain customary conditions. Once the sale becomes final, all 80-plus researchers and administrative staff are expected to become part of NAVER LABS. Based in Seongnam, South Korea, NAVER is Korea's leading Internet company, operating the nation's top search portal, NAVER, and other innovative services in the global market such as the mobile messenger LINE, the video messenger SNOW and the community app BAND. NAVER LABS is an ambient intelligence company that develops future technologies including autonomous driving, robotics and artificial intelligence. Since its establishment as NAVER's R&D division in 2013, it has led NAVER's innovation in technology through products such as Papago, the AI-based translation app; Whale, the omni-tasking web browser; and M1, the 3D indoor mapping robot. Founded in 1993, the Xerox Research Centre Europe is located just outside Grenoble, often dubbed the Silicon Valley of Europe. The centre has focused its research on artificial intelligence (AI), machine learning, computer vision, natural language processing and ethnography. "The research expertise at the European centre is perfectly aligned with NAVER LABS'. We expect immediate, powerful synergies," said Chang-hyeon Song, CEO of NAVER LABS and CTO of NAVER. "XRCE's world-class R&D achievements in AI technology, including computer vision and machine learning, will significantly strengthen NAVER LABS' research in 'ambient intelligence', including autonomous vehicles, AI/deep learning, intelligent 3D mapping, robotics and natural language processing." With such a strong foothold in Europe, NAVER LABS expects to considerably accelerate its development of ambient intelligence technologies around the globe, particularly in AI. NAVER LABS Europe homepage
The autonomous vehicle developed by NAVER LABS was the first in South Korea's IT industry to receive a temporary operating permit from the Ministry of Land, Infrastructure and Transport in February 2017. This allowed us to add to our autonomous driving technologies by combining data on actual driving conditions with the deep learning technologies that we had already amassed. In the future, we are planning to develop safer and more convenient mobility solutions by conducting research into additional autonomous driving technologies. We will also continue to turn numerous possibilities created by the connection of cars and data into safety and convenience on actual roads.
M1 is an indoor 3D/HD mapping robot that navigates autonomously in indoor spaces. M1 automatically collects high-resolution images and 3D spatial data via high-performance cameras and LiDAR, significantly improving the efficiency of what was previously a manual mapping process. The resulting HD maps provide spatial data that is essential to location-based services.
Company overview
Founded in 2013 as NAVER's research center, NAVER LABS spun off as a separate entity in 2017 to focus its research on ambient intelligence in areas such as autonomous driving, robotics, artificial intelligence and geospatial data. NAVER LABS' mission is to achieve ambient intelligence that enriches user environments with technology that proactively understands users and provides them with information and services. In line with this mission, distinguished researchers from Korea and Europe are committed to understanding the places where people carry out their lives, connecting these locations, and shaping the future of mobility.
Corporate media contents
[Video] NAVER LABS, an Ambient Intelligence company
[Video] NAVER LABS Intelligence in Mobility concept
[Video] NAVER LABS Robot M1
[Video] NAVER LABS Space & Mobility Interview
[Video] NAVER LABS M1 3D indoor mapping process
[Video] NAVER LABS IVI (In-vehicle infotainment)
[Video] NAVER LABS AROUND indoor robot
[Video] NAVER LABS AMBIDEX robotic arm
[Video] NAVER LABS AIRCART power sensitive cart
Corporate media channel
Web site Facebook Instagram Youtube SlideShare Behance