NAVER LABS launched the Mapping & Localization Challenge on April 8 to raise awareness of the importance of image-based localization technology and to support university researchers across Korea. Challenge participants will compete on the accuracy of visual localization (VL) in two tracks: indoor and outdoor. VL estimates an accurate six-degrees-of-freedom (6DoF) pose using only camera sensors, enabling high-precision localization where Global Positioning System (GPS) signals are weak, such as indoor spaces, skyscraper-filled city centers, and tunnels. NAVER LABS is providing its latest self-produced datasets, which have been used for actual research, to all participants of the NAVER LABS Mapping & Localization Challenge. This article introduces how the challenge is run as well as how the indoor and outdoor datasets disclosed to researchers have been created.

1. Dataset Building Process

1) Indoor dataset building: NAVER LABS' LiDAR SLAM

First, let's look at the building process for the indoor dataset. NAVER LABS uses a mapping robot called M1X, equipped with various cameras, smartphones, and high-precision LiDAR sensors on its main body, to generate indoor maps. NAVER LABS has also developed a backpack-type mapping device called COMET for spaces with irregular surfaces such as stairs. An integral technology for mapping is NAVER LABS' own high-precision LiDAR SLAM (LiDAR Simultaneous Localization And Mapping). One of its biggest advantages is that it can correct trajectory-estimation distortions by computing LiDAR-based odometry in environments where wheel odometry cannot be obtained. The estimated odometry serves as the initial trajectory between sequential LiDAR scans, enabling more precise and robust mapping.

Indoor mapping using LiDAR odometry

In addition, a method of matching the wide array of data collected at different times into one is essential.
NAVER LABS employs loop closure, a method of recognizing a previously visited location and updating the map accordingly. Loop closure based on LiDAR data enables very stable and precise matching between datasets.

Data matching through loop closure

As seen above, high-precision maps are created by combining mapping with M1X and COMET, LiDAR SLAM, and loop closure. The high-precision data contained in the map allows camera poses to be estimated accurately.

2) Outdoor dataset building: distortion correction and high-precision localization data

Next, the building process for the outdoor dataset. The outdoor dataset was extracted from NAVER LABS' HD map production process, which combines aerial photographs with MMS data collected by R1, NAVER LABS' in-house developed MMS vehicle. R1 collects image and geometric data using its multiple cameras and LiDAR sensors. This outdoor dataset contains stereo camera images taken in front of the vehicle and omnidirectional geometric data collected by LiDAR sensors mounted on top. Two features of this dataset are worth mentioning.

First is distortion correction during geometric data collection. This outdoor dataset utilizes geometric data collected by R1's LiDAR sensors over more than 5 hours of driving in Pangyo and Yeouido; approximately 130,000 frames of 3D point cloud data, excluding the vehicle's idle time, will be provided to the challenge participants. Raw LiDAR data collected while driving suffers from 3D geometric distortion that depends on the vehicle's speed at the time of data acquisition. Therefore, NAVER LABS applied its advanced localization technique to accurately calculate the vehicle's pose and speed and correct such geometric distortions.

Geometric distortion correction through precise localization

Second is providing high-precision localization data.
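The speed-dependent geometric distortion correction described above can be sketched as follows. This is a minimal NumPy sketch assuming the vehicle's linear velocity and yaw rate are constant over one sweep (a simplification; the actual pipeline uses the full localization output): each point, measured at its own timestamp, is re-expressed in the sensor frame at the start of the sweep.

```python
import numpy as np

def deskew_sweep(points, timestamps, linear_vel, angular_vel_z):
    """Re-express every LiDAR point in the sensor frame at the sweep start.

    points:        (N, 3) raw points, each measured at its own timestamp
    timestamps:    (N,) seconds since the start of the sweep
    linear_vel:    (3,) estimated vehicle velocity in m/s
    angular_vel_z: estimated yaw rate in rad/s (planar-motion assumption)
    """
    corrected = np.empty_like(points, dtype=float)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        yaw = angular_vel_z * t                       # rotation accumulated by time t
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        trans = np.asarray(linear_vel, dtype=float) * t  # translation accumulated by time t
        corrected[i] = R @ p + trans                  # move the point into the start frame
    return corrected
```

The faster the vehicle drives, the larger the per-point correction, which is why an accurate pose and speed estimate is a prerequisite for undistorted point clouds.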
For the outdoor dataset, NAVER LABS is providing accurate localization data, i.e., R1's pose at the time each stereo image and geometric frame was acquired. While R1 has high-performance GPS and can precisely localize the vehicle, the urban areas of Pangyo and Yeouido have GPS signal disruptions due to high-rise buildings, rendering some of the vehicle's pose data unreliable. To compensate, NAVER LABS used the high-precision localization technology from its autonomous driving research to provide improved, precise pose data, which is also used in evaluating the localization results submitted by participants.

3) Protection of drivers' vehicle information and pedestrians' personality rights

Pedestrians' faces and vehicles' license plates have been blurred in the datasets provided for this challenge in order to protect drivers' and pedestrians' personal information as well as their personality rights. For efficient multi-scale learning and inference, NAVER LABS applied SNIPER, presented at NeurIPS 2018, and the AutoFocus algorithm, presented at ICCV 2019, supplemented the results manually using LabelMe, and then applied Gaussian blur and a median filter to the selected regions.

Indoor/outdoor dataset blurring

2. Localization Baseline Technique

To encourage many Korean researchers to join the challenge and to suggest performance evaluation criteria, NAVER LABS disclosed the localization performance of the indoor and outdoor track baseline algorithms on a leaderboard at the start of the challenge. The indoor track baseline algorithm used a hybrid technique based on reference image retrieval and keypoint matching, which are widely used for single-image localization.

1) Indoor Track Baseline

For the indoor track baseline, a hybrid technique combining RootSIFT and NetVLAD was used. NetVLAD is one of the deep learning-based methods proposed to solve the problem of image retrieval.
NetVLAD architecture

With this technique, you can extract the reference images most relevant to the query image. To identify the correlation between an extracted reference image and the query image, keypoints that can connect the two images are extracted. The scale-invariant feature transform (SIFT) used here is a classic keypoint-extraction algorithm that extracts features invariant to image scale and rotation. Each detected keypoint's distinctive value is referred to as a descriptor, and RootSIFT is an algorithm that normalizes this value to improve the performance of SIFT.

Extracted SIFT/RootSIFT keypoints

Using these descriptors, keypoint matching between query and reference images can be performed as shown below.

Example of keypoint matching detected in different images

After calculating the 3D points corresponding to the reference image's keypoints, a Perspective-n-Point (PnP) solver is used to estimate the query image's 6DoF pose.

6DoF pose estimated by the indoor VL pipeline

2) Outdoor Track Baseline

While the outdoor track baseline is similar to the indoor baseline, it is a slightly different hybrid technique that uses R2D2 (tuned) and NetVLAD. The dataset disclosed for the outdoor track provides accurate poses for all mapping images. Hence, you can find the mapping image most similar to a given test image and use its pose as an approximation of the test image's pose. By projecting the LiDAR geometric data, provided as mapping data, into the mapping image, you can obtain 3D coordinates for the mapping image's keypoints. With the PnP algorithm, which uses the correspondence between this 3D geometric data and the 2D coordinates of keypoints matched between the test image and the mapping image, you can estimate the test image's pose.
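The RootSIFT normalization mentioned above is simple enough to sketch directly: L1-normalize each SIFT descriptor, then take the element-wise square root. Comparing the resulting vectors with Euclidean distance then amounts to comparing the original descriptors with the Hellinger kernel (SIFT descriptor values are non-negative, which this sketch assumes).

```python
import numpy as np

def root_sift(descriptors, eps=1e-7):
    """Convert raw SIFT descriptors (N, 128) to RootSIFT descriptors.

    Step 1: L1-normalize each descriptor so it behaves like a histogram.
    Step 2: take the element-wise square root (Hellinger mapping).
    The result is automatically (approximately) L2-normalized.
    """
    d = descriptors / (descriptors.sum(axis=1, keepdims=True) + eps)
    return np.sqrt(d)
```

In practice this is applied to the descriptors produced by any SIFT implementation before nearest-neighbor matching; no retraining or re-detection is needed.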
Schematic diagram of the outdoor track baseline algorithm

The outdoor track baseline algorithm uses the global image descriptor extracted by NetVLAD to index and search mapping images, and uses R2D2, a keypoint detection algorithm developed by NAVER LABS Europe in 2019 and fine-tuned on driving images, to detect and match image keypoints. Since the baseline algorithm localizes using only the final frame of each stereo video given as a test case, we expect participants who fully utilize the given data to achieve much better localization performance.

This has been a brief introduction to the dataset building process and the baseline localization technique for the NAVER LABS Mapping & Localization Challenge. NAVER LABS is in full support of the many researchers participating in the challenge. For more information and inquiries regarding the challenge, please visit the website below.

Go to the Mapping & Localization Challenge website

Reference

Arandjelovic, Relja, et al. "NetVLAD: CNN architecture for weakly supervised place recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
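The LiDAR-to-image projection step of the outdoor baseline, which attaches 3D coordinates to mapping-image keypoints, can be sketched with a pinhole camera model. The intrinsics `K` and the world-to-camera pose `(R, t)` are assumed inputs from calibration and the mapping pose; this is an illustrative sketch, not the baseline's actual implementation.

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project (N, 3) world points into a mapping image.

    K: (3, 3) camera intrinsics; R, t: world-to-camera rotation/translation.
    Returns (N, 2) pixel coordinates and a boolean mask of points that lie
    in front of the camera (only those projections are meaningful).
    """
    cam = points_world @ R.T + t            # world frame -> camera frame
    in_front = cam[:, 2] > 0
    uv_h = cam @ K.T                        # homogeneous pixel coordinates
    uv = uv_h[:, :2] / uv_h[:, 2:3]         # perspective divide
    return uv, in_front
```

Once each 2D keypoint has been paired with a projected 3D point this way, the resulting 2D-3D correspondences are exactly what a PnP solver consumes to estimate the test image's 6DoF pose.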
Mobile robots are increasingly rolling into a wide variety of services and everyday spaces. It is therefore essential to research autonomous driving software that lets robots overcome more diverse and complex issues, rather than simply enabling them to arrive at a destination without colliding with obstacles.

AROUND C surrounded by children: a common situation for mobile robots deployed in daily living spaces

For example, in an environment with no human presence and no structural changes, a robot only needs to find the shortest path that avoids physical collisions with obstacles; this is relatively simple. However, service robots in restaurants and cafes have many people standing or moving around in their vicinity, so they must move in consideration of obstacles whose size, shape, and position change every second. Above all, people are the most challenging obstacles for robots in everyday spaces. If a robot approaches people at high speed, it may cause anxiety even if no collision occurs. Therefore, more caution is needed to avoid such situations, keeping a greater distance from people than from other obstacles. To address this, the majority of robots deployed in living spaces drive slowly, and if an unexpected obstacle (or person) appears, they stop in place and wait for it to pass. However, exercising too much caution slows down the robot's original mission (e.g., delivery), causing users to feel frustrated. From a software engineer's perspective, it is challenging to balance the weights of subjective factors such as "anxiety" and "frustration" when creating autonomous driving algorithms.
As the weights that satisfy many people vary by the space where the robot is located, the work it performs, and the users it deals with, finding good weights requires much trial and error; and even when a set of weights is found, there is no guarantee that the driving algorithm will function properly with it. To address these complex issues and design robots that can be rapidly assigned to various spaces for diverse tasks, NAVER LABS used meta deep reinforcement learning and Bayesian active learning.

Meta deep reinforcement learning

1) Deep reinforcement learning

In reinforcement learning, an agent interacts with its environment: it observes the environment's state, affects the environment through actions, and experiences the resulting rewards. Through this process, the agent learns a policy, the function that determines which action to take in each state, so as to maximize the cumulative reward it expects to obtain from the present point onward. Recent studies that incorporate deep learning into reinforcement learning have produced great results in decision-making and control, such as AlphaGo and OpenAI Five, and a large body of research is ongoing to apply them to robot behavior control. NAVER LABS has also studied deep reinforcement learning for the autonomous driving of AROUND robots and unveiled the results at events such as CES and ICRA. Autonomous driving algorithms that employ deep reinforcement learning do not require accurate maps, and with the help of the GPU they can very rapidly predict optimal actions that take both current and future rewards into account. However, learning requires a tremendous amount of data and time, and the agent cannot adapt without additional learning if the robot or the reward settings change. This makes the approach impractical for mobile service robots, which must behave differently depending on the characteristics of each place, task, or user.
For example, suppose an agent was trained in an environment where it could move at up to 0.4 m/s and received a reward of -1 when it got close to a person. If unsatisfied users then require it to move at up to 0.6 m/s and receive a reward of -3 when it gets close to a person, it takes a very long time for the agent to become satisfactory again, as it must relearn from scratch through countless iterations.

2) Meta reinforcement learning

To address this issue, NAVER LABS relied on meta reinforcement learning. Meta learning is a subfield of machine learning that enables "learning to learn": agents are trained to solve many kinds of problems drawn from a certain distribution, so that they can adapt quickly when new problems within that distribution are given.

Simulator using procedural generation technology

NAVER LABS generates a new indoor environment in the simulator every time and samples random robot and reward settings, allowing AROUND robots to adapt immediately to various robot settings (maximum speed, rotation velocity, acceleration, etc.) and environmental settings (the structure and size of the space, rewards for taking the shortest path, penalties for colliding with obstacles, penalties for getting too close to people, etc.). In addition, NAVER LABS placed more than ten robots in one environment and let them learn simultaneously, greatly increasing the amount of training data and allowing the robots to develop the ability to deal with moving obstacles while avoiding each other.

A still from a driving video of learned agents

As a result of using meta reinforcement learning, we obtained agents that, when given settings within the learned range, immediately adjust their behavior and perform on par with agents trained only on those specific settings.
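The reward settings in the example above can be sketched as a parameterized reward function; the weights are exactly the kind of "settings" that are sampled during meta training and that the agent must adapt to. The weight names and default values here are illustrative assumptions, not NAVER LABS' actual reward design.

```python
def navigation_reward(progress_m, collided, dist_to_person_m,
                      w_progress=1.0, w_collision=-10.0,
                      w_person=-1.0, person_radius_m=1.0):
    """One step of a configurable navigation reward.

    progress_m:        metres moved toward the goal this step
    collided:          whether the robot hit an obstacle
    dist_to_person_m:  distance to the nearest person
    Changing w_person from -1 to -3 (or the speed limit elsewhere in the
    simulator) is the kind of settings change described in the text.
    """
    reward = w_progress * progress_m
    if collided:
        reward += w_collision
    if dist_to_person_m < person_radius_m:   # too close to a person
        reward += w_person
    return reward
```

A conventionally trained agent bakes one particular weight combination into its policy; the meta-learned agent instead receives the weights as part of its task specification and can switch behavior without retraining.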
Performance comparison, across various settings, of meta reinforcement learning agents and agents trained only on the corresponding settings (refer to the paper for further details)

3) Bayesian active learning

Meta deep reinforcement learning gave us agents capable of adapting to different settings, but it was still difficult to figure out which setting was best for the space, task, and user the robots would serve. In many cases, this requires a great deal of labor and time, since engineers choose settings at their discretion, receive feedback from users or UX designers, and gradually refine them by trial and error. To speed up this process, NAVER LABS surveyed preferences for various settings in simulation and, through joint research with NAVER LABS Europe, developed a Bayesian neural network based algorithm to efficiently select optimal setting candidates. A Bayesian neural network (BNN) is a model that represents the parameters of a neural network as probability distributions rather than fixed values; it can perform well even with little training data and provides a measure of uncertainty about its predictions. By predicting from a small amount of preference survey data which settings are likely to be preferred, and then including in the next survey those settings that are expected to be highly preferred but still carry high uncertainty, we were able to reduce the number of preference surveys required to find the optimal settings.

A still from a video of the survey, which used the simulation

AROUND C pilot service

To test the performance of the newly developed algorithm, a pilot service was conducted in which a robot served beverages at Bear Better Cafe, located on the first floor of NAVER Green Factory, from November to December 2019.
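The survey-candidate selection described above can be sketched as follows. Given preference scores sampled from the BNN's posterior (one row per posterior sample, one column per candidate setting), an upper-confidence-bound-style rule picks the candidate that is both promising and still uncertain. This is an illustrative sketch of the acquisition idea, not NAVER LABS' exact algorithm, and the trade-off coefficient `kappa` is an assumption.

```python
import numpy as np

def next_setting_to_survey(sampled_scores, kappa=1.0):
    """sampled_scores: (num_posterior_samples, num_candidate_settings).

    Returns the index of the setting to include in the next preference
    survey: high expected preference plus high posterior uncertainty.
    """
    mean = sampled_scores.mean(axis=0)   # expected preference per setting
    std = sampled_scores.std(axis=0)     # posterior uncertainty per setting
    return int(np.argmax(mean + kappa * std))
```

Each survey round then adds new preference labels, the BNN is updated, and the rule is applied again, so the surveys concentrate on the settings that are most informative about the optimum.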
NAVER LABS' Robotics and UX teams found a setting suitable for the cafe service through a preference survey and applied it to the meta reinforcement learning agent so the robot could drive autonomously. The test confirmed that AROUND C delivered beverages rapidly and safely through crowds, even in chaotic and complex situations where it is almost impossible for ordinary service robots to operate.

AROUND C serving drinks in a crowded and complex situation

Conclusion

The paper on the meta reinforcement learning and Bayesian active learning technology employed in AROUND C was accepted at ICRA 2020, the world's largest robotics conference, and is awaiting presentation. After successfully completing the pilot test, NAVER LABS has continued research on reinforcement learning to solve the problems found in the service test and achieve higher performance. AROUND robots equipped with further upgraded autonomous driving algorithms will be deployed throughout NAVER's new office building. We are hiring deep reinforcement learning engineers to apply reinforcement learning to more problems, and we look forward to receiving applications from people interested in state-of-the-art reinforcement learning research.

References

Silver, David, et al. "Mastering the game of Go without human knowledge." Nature 550.7676 (2017): 354-359.
Berner, Christopher, et al. "Dota 2 with Large Scale Deep Reinforcement Learning." arXiv preprint arXiv:1912.06680 (2019).
https://www.youtube.com/watch?v=jqSztRdd-mc
Choi, Jinyoung, et al. "Deep reinforcement learning of navigation in a complex and crowded environment with a limited field of view." 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019.
Blundell, Charles, et al. "Weight uncertainty in neural networks." arXiv preprint arXiv:1505.05424 (2015).
Choi, Jinyoung, et al. "Fast Adaptation of Deep Reinforcement Learning-Based Navigation Skills to Human Preference."
It is no longer strange to say that autonomous driving is where the future of the road leads. To get there, we have to jump through all sorts of hoops, and one of them is HD maps. While typical maps have communicative features intended for people to see and understand, HD maps must be more specific and precise because they are intended for machines. HD maps include not only connections between roads, but also the number and type of lanes, connections between lanes, features of the road surface, and road traffic control devices including markers, traffic lights, and signs. The amount and variety of such information is far greater than in existing maps used for driving navigation. Hence, NAVER LABS came to realize that HD maps cannot be built in the traditional way. By combining its AI and aerial image processing capabilities, NAVER LABS unveiled hybrid HD mapping, which organically integrates city-scale aerial images with data collected by a mobile mapping system (MMS). While the MMS acquires detailed information on the road and completes the HD map, a 3D model made from aerial images lays the foundation for a balanced map covering what the MMS cannot see. In this article, we would like to introduce the main process of building a city-scale 3D model based on aerial images.

Photogrammetry

For building an aerial-image-based 3D model, photogrammetry is essential. Simply put, photogrammetry is measuring the three-dimensional (3D) real world via images; in other words, it is about restoring 2D aerial images back into the 3D real world. Then how can we restore 2D images into 3D? The key is to use disparity. Figure 1 shows the same object photographed from the left and right. By overlapping the two images, we see that the closer an object is to the camera, the greater the disparity, and vice versa. Using this disparity, we can turn two or more 2D images into 3D, as in the bottom right of Figure 1.
In other words, an object in an aerial image can be a building roof if it is close to the camera or the ground if it is far away from the camera.

[Figure 1]

Once you understand how 2D images turn into 3D, you may notice that the pose of the images is crucial. As seen in Figure 2, when the pose of an image is not correct, it can be restored to a wrong location in 3D. Hence, it is essential to accurately adjust the pose of the images.

[Figure 2]

NAVER LABS restored thousands of aerial images into the 3D real world following the procedure in Figure 3.

[Figure 3]

First, the same points in the thousands of images are connected, as in Figure 4. This is called image matching, and such connections between images allow us to accurately estimate their poses.

[Figure 4]

After connecting the images, we estimate their accurate poses. This is called bundle adjustment (BA), and it is an integral part of photogrammetry. NAVER LABS performed batch BA and was able to accurately estimate the poses of thousands of images. Figure 5 visualizes this BA: while the images start with different poses, they are aligned into accurate poses through BA. This can also be referred to as optimization.

[Figure 5]

During BA, NAVER LABS adds a further constraint: ground control points (GCPs). GCPs are points in the photographed area whose 3D coordinates have been measured. Why do we need GCPs? The real world has a pre-defined coordinate system. To place the images in 3D at their actual locations on Earth, taking the Earth's curvature into account, NAVER LABS constrained the BA with GCPs. Figure 6 illustrates how BA is performed depending on the GCPs.

[Figure 6]

When the poses of the aerial images are aligned, we can calculate the 3D structure of the ground surface.

Digital surface model (DSM)

As explained above, we can calculate the distance of an object projected in the image by using disparity.
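The disparity-depth relationship just mentioned can be written down directly. For a rectified pair of images with parallel viewing directions, depth is inversely proportional to disparity; `focal_px` (focal length in pixels) and `baseline_m` (distance between the two exposure positions) are assumed calibration values in this sketch.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth (metres) from disparity (pixels) for a rectified image pair.

    Larger disparity -> closer object (e.g. a roof); smaller disparity
    -> farther object (e.g. the ground), matching the discussion above.
    """
    return focal_px * baseline_m / disparity_px
```

For example, halving the measured disparity doubles the estimated depth, which is why sub-pixel disparity accuracy matters so much for far-away ground surfaces.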
Using commonly used SIFT or SURF matching points from images whose poses have been estimated, you can calculate point-level spatial coordinates. However, these points are too sparse for a detailed 3D model, and repeated patterns cause mismatches, so a "dense matching" algorithm is needed. First, as in Figure 7, by varying the distance (depth = disparity) for each pixel (x, y) of an image (the master), we build a cost volume that quantifies how similar the pixel is to adjacent images (the slaves) at each depth.

[Figure 7]

Methods to quantify the matching cost include absolute difference (AD), sum of absolute differences (SAD), normalized cross-correlation (NCC), the census transform (used in SGM), and DAISY. As the 3D cost volume filled this way contains various noise, global optimization is applied to find the most stable distance, with each pixel influencing its neighbors in 4 or 8 directions. Commonly used global optimization techniques include belief propagation, semi-global matching (SGM), and graph cuts. Since a single image pair yields only discrete depth values, dense matching results estimated from multiple images of the same area taken at different locations are combined to create a continuous 3D DSM.

3D model of Seoul

With the above process, we completed the DSM for the entirety of Seoul in 2019. Figure 8 shows the DSM pseudo-colored by height, and Figure 9 shows the DSM completed into a 3D model with textures from the aerial images. NAVER LABS is planning to update the Seoul 3D model and HD map with the latest photographs in 2020 and is preparing a method to effectively manage the life cycle of HD maps.

[Figure 8] [Figure 9]
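The cost-volume construction and per-pixel disparity lookup from the dense-matching section above can be sketched as follows, using SAD as the matching cost. A real pipeline would follow this with SGM-style global optimization and fuse results from multiple image pairs; the window size here is an illustrative assumption.

```python
import numpy as np

def sad_cost_volume(master, slave, max_disp, win=1):
    """Build a (max_disp, H, W) cost volume for two grayscale images.

    Disparity d compares master[y, x] with slave[y, x - d]; the cost is the
    sum of absolute differences (SAD) over a (2*win+1)^2 window. Pixels with
    no valid comparison keep an infinite cost.
    """
    H, W = master.shape
    cost = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        diff = np.abs(master[:, d:] - slave[:, :W - d])
        agg = np.zeros_like(diff)
        padded = np.pad(diff, win, mode="edge")
        for dy in range(2 * win + 1):            # aggregate over the window
            for dx in range(2 * win + 1):
                agg += padded[dy:dy + diff.shape[0], dx:dx + diff.shape[1]]
        cost[d, :, d:] = agg
    return cost

def winner_takes_all(cost):
    """Pick the lowest-cost disparity per pixel (no global optimization)."""
    return np.argmin(cost, axis=0)
```

Winner-takes-all is the naive readout; replacing it with belief propagation, SGM, or graph cuts is what suppresses the noise mentioned above.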
The kind of talent NAVER LABS seeks is a passionate, self-motivated team player. Perhaps "a self-motivated team player" is an oxymoron. Even so, we have continued to venture out, and the culture continues to snowball. We present the stories of experts in various fields cooperating without boundaries, making decisions on their own, and taking on challenges together. Here is how to work at NAVER LABS.

See all series of H2W@NL

Last November, NAVER LABS distributed its HD map dataset for free, the first company in Korea to do so, to help the many autonomous driving technology researchers in Korea. Then why are HD maps important for autonomous driving research? HD maps ensure safe and effective autonomous driving. HD maps and synced sensor data allow accurate, uninterrupted recognition of the current location even in a city full of high-rise buildings, provide a comprehensive view of intricate mazes of narrow streets for effective route planning, and reveal traffic lights and crosswalks in advance to raise the accuracy of real-time recognition. Therefore, from the very beginning of its autonomous driving research, NAVER LABS has been concurrently working on HD map solutions. This has led to the development of hybrid HD mapping, a technology that combines aerial photogrammetry and MMS data to create high-precision maps. How was this original mapping solution, never attempted before, invented? Let's hear the stories from the players themselves.

Q. Why do you work on developing HD maps?

HD maps, a step toward autonomous driving

(Hyung Joon Kim | System Software Development) The era of autonomous driving is within reach. In this new era, the first thing we need is HD maps, because autonomous vehicles need very precise information on each and every lane to drive safely on the road.
While it is common to create HD maps with road data collected by mobile mapping system (MMS) vehicles, this approach is time-consuming and costly, and the wider the region, the more resources are required. Therefore, we wanted to find a way to dramatically reduce the required resources. We continued to search for a solution that would create maps for larger urban areas faster and more efficiently while still maintaining accuracy. The result is NAVER LABS' hybrid HD mapping technology: the layout of roads over large areas, building information, and so on are obtained from aerial photographs, then combined with data acquired by our own MMS vehicle, R1, to create HD maps. With this approach, R1 can create HD maps with minimal driving, dramatically reducing the time and cost required.

(Junho Jeon | Visual Feature Map Development) The completed HD maps contain high-precision information that is essential for autonomous driving on the road: the road layout map, which holds the structural information of the road; the point cloud map, which holds geometric information; and the visual feature map, which holds visual information.

(Yong-ho Shin | Sensor Calibration) We were able to design and create hybrid HD mapping as a new approach because we had already internalized both autonomous driving and aerial photogrammetry-based mapping technologies.

A distinctive solution for efficiently creating HD maps of cities

(Jinhan Lee | PM/Software Development) In fact, many companies research autonomous driving technologies, but not many of them have their own HD mapping technology. NAVER LABS was like that, too, at first. Around the time our autonomous driving project began in 2016, we felt something important was missing, as we didn't have our own HD mapping technology.
HD maps are vessels that store information hard to come by with sensors alone, and they play a major role in improving the performance of autonomous vehicles. That is why we internalized this way of creating our own vessel. We now have our own solution for efficiently building HD maps of cities, and this achievement has already been applied to the localization that upgrades our autonomous driving technology.

Q. How did you collaborate during development?

Output leads straight to new input

(Jinhan Lee | PM/Software Development) A number of experts from a variety of fields joined to develop hybrid HD mapping, structured so that the outcome of one project serves as the input for another. For example, the development project for the R1 hardware led to the sensor calibration project, while the road layout data became linked to the MMS data. Such organic interdependencies secured the successful development.

(Unghui Lee | Sensor Data Tool Development) NAVER LABS' internally developed MMS vehicle R1 is equipped with a large number of sensors, including multiple cameras, radars, satellite navigation receivers, and gyroscopes. Beyond the drivers for these individual sensors, we had to develop a system that simultaneously stores incoming data from all sensors without any loss, as well as the operating software.

(Yong-ho Shin | Sensor Calibration) An integral process for R1 is converging the collected data: calibration. Relative gaps in location and direction occur between the sensors, and these must be precisely matched through calibration; otherwise, the collected data cannot be put to proper use.

Convergence of data obtained from the sky and on roads to create HD city maps

(Jinseok Kim | Aerial Mapping) If R1 takes care of the ground, we use information captured from the sky. We have developed a way to dramatically improve accuracy through aerial photography.
We create distortion-free vertical true orthophotos at 8 cm resolution from aerial photographs, and then create 2D/3D road layouts for the road area. By combining this with the point cloud data collected by R1, we can rapidly and efficiently create HD maps for large areas.

(Juntaek Lim | LiDAR Feature Map Development) As such, combining the road point clouds collected by R1 with the large-area road layouts scanned from aircraft is a completely original solution. Of course, just piecing them together does not yield an HD map right away. It is essential to create a deep learning model that deletes unnecessary parts, such as vehicles and people, from the scanned data, and to extract keypoints for the vehicles and robots that will use the HD map.

Experts from different fields, but a united team

(Junho Jeon | Visual Feature Map Development) Each group of experts develops a necessary element of the HD maps, such as the road layout map, point cloud map, or visual feature map, and the HD maps are created to reflect these data well. The mapping solution has been created through collaboration among a number of different teams: experts in convergence and recognition of aerial photogrammetric data, equipment and sensor systems for MMS data collection, location recognition using GPS and LiDAR data, and deep learning for extracting visual information are assembled into one team. I believe this close cooperation under a shared objective allows for high-quality research and development.

"Outcomes do matter. However, I believe that the process of defining issues and searching for solutions together as one team is more important.
Because that is what ultimately leads to the best outcomes."

(Hyung Joon Kim | System Software Development) The fact that organic collaboration among experts from various fields happens at all times is a significant driving force when a project faces challenges. Once, an issue arose with the stability of the data acquisition system. Hardware engineers and software engineers all got together to conduct a joint examination. We went to the field to verify the situation at the time the issue occurred, and it was the mechanical engineers who found the root cause and solved it.

(Sang-jin Kim | Hardware Design) I remember that, too. Intermittent short circuits caused by vehicle vibration were behind the issue. I believe the key to finding the right solution within a short period of time is, after all, organic teamwork.

(Yong-ho Shin | Sensor Calibration) I would also like to reiterate how well we collaborate, as if we are all on one team. It is a great experience to be able to focus on our work under the simple goal of doing well together.

Q. How is the progress, and what are the objectives?

Building a 2,000 km road layout map of Seoul

(Jinseok Kim | Aerial Mapping) We have completed a road layout map covering over 2,000 km of roads with four or more lanes in Seoul. The road structure information required for autonomous driving (road surface markings such as lane markings, centerlines, stop lines, left-turn markings, etc.) has been converted to precise vector data formats. For mapping a large city like Seoul, it is the only such technology in Korea.

(Hyung Joon Kim | System Software Development) Establishing our own process for hybrid HD mapping has allowed us to create HD maps of desired areas with minimal effort compared to before. The freely disclosed HD maps of the Pangyo and Sangam areas also mark this achievement.
(Jinhan Lee | PM/Software Development) It felt really rewarding when the free distribution of HD maps for the Sangam/Pangyo areas was announced at DEVIEW. Many organizations that conduct research on autonomous driving in Korea applied to obtain the dataset. I hope that our HD maps will further boost the development of autonomous driving technology in Korea. (Junho Jeon | Visual Feature Map Development) The HD maps of NAVER LABS aim at precise localization on the road. In the visual feature map, for example, the minimum visual and geometric information necessary to recognize locations is compressed into descriptor format, keeping the data size for large urban areas very small. We will continue to make similar optimizations. A step closer to a world of future mobility (Sang-jin Kim | Hardware Design) The ultimate goal of upgrading the mapping system is to create highly reliable maps. We remain committed to developing technologies that rapidly improve the reliability, flexibility and operational capacity of our hardware systems, and to implementing them at lower cost. I am confident that as we obtain more research outcomes and collect more high-precision data, we will be able to bring forward the world of future mobility that we have so far only envisioned.
International researchers are working on the most advanced AI technologies in Grenoble, France, the so-called “Silicon Valley” of Europe. Read on to find out more about NAVER LABS Europe. About 110 talented researchers from 26 countries have gathered at NAVER LABS Europe, including Christopher Dance, one of the most authoritative researchers in machine learning; Florent Perronnin, who served as Director of Facebook AI Research (FAIR); and Gabriela Csurka, who led the development of R2D2, which precisely identifies location regardless of a changing environment, as well as other researchers whose papers have been cited more than 10,000 times in the AI field. Recently, NAVER LABS Europe hosted the global AI for Robotics workshop, which also served as the starting point for NAVER’s Global AI R&D Belt, and is committed to its role as a new hub where researchers from Asia and Europe exchange ideas and collaborate. Here are some interviews with key researchers at NAVER LABS Europe, who are trying to solve problems in our daily life with AI technologies such as natural language processing, which understands language and context; computer vision, which understands the actual spaces of daily life; and AI for robotics. France’s Largest AI Research Centre Michel Gastaldo (Centre Director) │ NAVER LABS Europe is the largest industrial AI research center in France. It is bigger than Facebook’s or Google’s AI research centers. Various projects are being conducted with universities in Europe and the US. Naila Murray (Laboratory Manager) │ We are conducting research in various fields such as machine learning, computer vision, natural language processing, search suggestions, and AI for robotics. Moreover, I believe that research on system AI and UX is also important. The varied perspectives of researchers from different fields, and approaches different from the traditional ones, are our core competencies. 
Michel Gastaldo │ We are currently moving towards two goals: on the one hand, we provide existing users of NAVER services with differentiated experiences and assistance through AI technologies; on the other, we take on new challenges in fields that have not existed before, such as AI for robotics. Naila Murray │ We are not just seeking to showcase new technologies one after another. We are also devoting ourselves to setting and presenting a research agenda on what the future holds for AI. Major Research Areas Connected to Our Lives ■ Deep Learning Julien Perez (Machine Learning) │ Most researchers at NAVER LABS Europe are conducting research on deep learning. From understanding computer vision and natural language to controlling robotic arms, we have been applying and researching deep learning across various fields. There is something very interesting that I have researched recently–machine reading. It is a technology in which the machine learns to understand text such as Wikipedia articles by itself, and provides answers in natural language. It is like a child learning how to read. We have focused on machine reading for the past five years, and we are starting to see very encouraging outcomes. What we are aiming for is to move beyond the current level of understanding in machine learning and deep learning. The current models have hit their limits in commonsense knowledge as well as commonsense acquisition and reasoning. This is where machines cannot outperform humans, even children, and why machines provide simpler answers than we ask for. We want to solve problems like that. ■ Natural Language Processing Matthias Galle (Natural Language Processing) │ The Natural Language Processing Team teaches computers how to understand and use human language. It is about teaching the computer to read or write text on its own, or both simultaneously. Our team won two important academic competitions in 2019 alone. 
They included tasks such as translating text containing noise, and generating and translating basketball game summaries. It was a competition in which we competed with MS, Baidu, CMU, and Johns Hopkins University. As such, we can take pride in the technical know-how that we have accumulated. There is a toolkit that we have been developing for several years. This toolkit applies very comprehensive methods to understand a document. For instance, when we look at a document, we can accurately tell the difference between a footnote and a title, because they are represented differently, such as in different fonts. We have applied such intuitive elements to this toolkit. It can be used to understand a restaurant menu, or a historical document from the 19th century. In fact, this toolkit is playing an important role in interpreting a massive amount of old texts in Europe. The project is called ‘Time Machine’, a history digitization project currently being conducted across Europe. ■ AI for Robotics Christopher Dance (Research Fellow) │ The research institutes that I know of usually separate robotics and AI. But here at NAVER LABS Europe, we combine them. When we talk to prestigious researchers at universities, they soon acknowledge that our technologies applying AI to robotics are cutting-edge. I believe that machine learning has penetrated 90% of what humans do, and we are using it for robotic mobility. We are researching how to learn with few samples. While millions of samples may be required to learn to play a game of Go, robots cannot afford that many samples yet. We are focusing on the samples that are most necessary for robots to perform tasks effectively. We have also improved robots’ autonomous driving performance. There are conflicting conditions to consider. A robot must maintain a safe distance from humans in the same space while not staying too far away. 
While it should not get stuck in a narrow space that it cannot escape, it should not always avoid such spaces either. While it should complete tasks quickly, it should not move so fast that it makes people feel unsafe or uncomfortable. Ultimately, it is about finding a balance between them. The Computer Vision Team started an interesting project. It is very challenging for robots to pick things up; it is very different from how humans perform such actions. To make robots pick things up better, we should first identify what data or understanding is required. It is quite interesting to find new methods during this process. ■ Computer Vision Martin Humenberger (3D Vision) │ I am part of the 3D Vision Team, and we are conducting research in areas such as keypoint extraction, feature matching, visual localization, camera pose estimation, and 3D scene understanding. Among them, R2D2 (Repeatable and Reliable Detector and Descriptor) is a technology that we have developed for feature extraction. It can detect the required areas in image information based on data. The computer can learn by itself and find correspondences, in a completely new way, between images of the same place featuring different weather or lighting. With this technology, we won first place in the Visual Localization Challenge at CVPR 2019. We are also conducting joint research to estimate depth and pose in videos. This is a very useful technology for understanding 3D scenes in fields such as autonomous driving. Gregory Rogez (Computer Vision) │ In particular, our 3D pose estimation technology for scenes with multiple people is the world’s finest. It is good at identifying hidden parts; it works very well even when part of a person is hidden or out of the scene, and it can be applied precisely to hands alone. This technology attracts much attention every time it is demonstrated at a conference. The Computer Vision Team has also been researching image representation and visual search. 
With visual data, computers can understand human behavior and text in images. Currently, the Computer Vision Team at NAVER LABS Europe has the most recognized researchers in this field. We will attract more talented researchers and continue to conduct exceptional research. It couldn’t be a better environment for researchers. Global AI R&D Belt for Bigger Possibilities Christopher Dance │ AI research is gaining momentum in Europe. DeepMind, for instance, which has made great contributions to AI research for Google, is based in London. In addition, great research centers are becoming interconnected. If you look at the companies, however, those from China and the US are dominant. In particular, China has grown rapidly in the field of computer vision, Google is dominating machine learning, and Microsoft is catching up rapidly. Michel Gastaldo │ We wish to disrupt this trend. Researchers from Asia and Europe can be a strong alternative across the globe, and NAVER’s Global AI R&D Belt is a great opportunity. It is going to be a new platform of collaboration for AI researchers in Europe and Asia, allowing talented researchers from various regions to exchange ideas and collaborate more easily. Naila Murray │ We are very interested in researchers from Korea, Vietnam and Japan in addition to Europe. Since AI is a very extensive and multi-faceted field, it is important that researchers are exposed to various perspectives and approaches. Different cultures and histories from different regions will create new perspectives. We can approach the same problem in different ways, and we are also more likely to identify new problems which we have not been aware of before. We want to solve various challenges by collaborating together. NAVER LABS Europe homepage >
Robots have been with us for a very long time–in literature, film, comics and even factories. Yet it is still difficult to see them in our daily lives. When will they come into our daily life, and what will they be like? We asked these questions to Professor Sangbae Kim, who serves as a technical consultant at NAVER LABS and leads the MIT Biomimetic Robotics Lab. Q. Why is it still difficult to see robots helping people in daily life? Robots have mostly been inside factories, performing simple predefined tasks. Then, various algorithms recently advanced at a rapid pace through artificial intelligence (AI) and deep learning. Based on such advances, many engineers are thinking more deeply about how we can bring robots outside factories for direct assistance and contact with people. It is going to happen in the not-so-distant future. Nevertheless, it is still very difficult to make robots helpful to people in daily life. Daily life settings are quite different from factories. In factories, robots are optimized for simple tasks such as picking up and moving things. With accurate position control alone, we were able to solve many problems in factories. However, when you look at what we do in our daily life, things are not so structured at all. For instance, we do the dishes differently every day. Dishes are very different from one another in weight and fragility. If you want to do the dishes quickly, you often knock dishes against one another, and you have to take extra care to make sure that the dishes do not break. Even if you want to make robots do such menial tasks, many things are missing in today’s robotic algorithms. In fact, the human brain and its functions are truly amazing. But from our point of view, that is taken for granted, so we do not recognize how difficult things can be. For example, let’s say I put my hand in my pocket and take out a 500 won coin. Then, what have my fingers done, and how? 
It’s perplexing and also difficult to explain. The brain, or somewhere in the nervous system, has just done it on its own. We don’t recognize that this hidden process and function are required for robots. There is some wishful thinking that a robot will work like a human right away if you bring a robotic arm from the factory, put a camera on it, and run some machine learning on it. From my point of view, something big is missing, and we should keep looking into what it is. When developing robotic technology, there are times when some things do not receive adequate attention, and times when other things receive undeserved attention. For instance, everyone’s mind was blown when they saw Mini Cheetah backflipping. Truth be told, that is impressive only from a human’s point of view. Programmatically, it is actually tens of thousands of times more difficult to make it walk straight without tripping than to make it backflip. Many of these tasks for robots will eventually happen in daily life. Q. What are some projects and/or hot topics that you’ve been thinking about recently? What many professors are talking about these days is whether we should optimize existing technology for a certain task, or give up previous technology and replace it with machine learning. If we mix them together, the question then becomes one of architecture. When a task gets a little complex, a robot will not function as intended without the right architecture. When you talk about AI today, most problems are solved by machine learning, deep learning and neural networks, and people are looking forward to advances in combining various neural networks. Many specialized neural networks will be created, such as an AI that only recognizes faces or an AI that only avoids people. Most discussions are about how we combine them and align them better for a more complex function. 
In the end, it is about whether we go data-driven or model-driven, and how we can mix and combine them well. Q. What role will robots play in our life? Eventually, I think it will be labor, because smartphones are already doing almost everything that does not require labor. What are the things that smartphones cannot do? Although smartphones already do intelligent things such as delivering information and communicating with people, they do not have a motor. In other words, a smartphone cannot bring a cup of water to someone physically challenged. Of course, today’s robots still lack physical intelligence, so there is a lot more to research. In any case, it will begin with services that do not require complex actions or interactions. That is why many people are now thinking about delivery. Not to the extent of robots knocking on doors, but performing labor useful to people, like delivering something from point A to point B. This is an attempt to connect technology to value. When a robot has an autonomous driving function, people attempt to connect this function to a delivery service where the robot delivers things to people. As such, it is important to identify how we connect a robot’s functions to valuable aspects of our life. Q. With regard to research in robot mobility, how far has it come along, and how will it advance? We developed the MIT Mini Cheetah as the latest in the Cheetah series of quadruped robot research, so its performance is better than its predecessors’. Its small size also makes it very good at resisting shocks. This robot can run experiments hundreds of times in a day. Furthermore, a robot must be designed extremely well to detect a shock without any dedicated sensor, and Mini Cheetah is great in this regard too. Recently, we have been developing an algorithm that maintains balance while controlling force in accordance with directional instructions. 
When we applied this algorithm, Mini Cheetah began to show outstanding movements that had been difficult to imagine in other robots before. It is like a superhero in the robot community. You can drop it from 2 meters above the ground, and it can take 10 steps in 1 second. Mini Cheetah can perform movements that other robots have found difficult. I think these technological elements in Mini Cheetah can be applied to biped robots in the future. They will be able to walk around while using their arms in places like construction sites. One of the advantages of biped robots is that they are taller. While a quadruped robot would have to get bigger to pick something up from a bookshelf or ledge, a biped robot can perform movements similar to a human’s at a smaller size. Although it is still difficult and dangerous to bring biped robots into our daily life, I think it is the ultimate direction, and many technological elements of Mini Cheetah will be used for it. I think research on robots equipped with functions that can actually help people should continue for years to come. Related Articles > “AI for Robotics,” A Global Workshop on the Future of AI and Robots MIT Mini-Cheetah Workshop Professor Kim Sangbae, a Global Robotics Engineer, joins NAVER LABS as Technical Consultant
Background Large indoor spaces like shopping malls and department stores are crowded with many people as well as stores stretching one after another in a complex structure. It is easy to get lost even when looking for stores you have been to before. Hence, directions via augmented reality (AR) navigation can be especially useful in such spaces. To provide stable and reliable AR navigation in indoor spaces, it is first and foremost essential that users’ locations be accurately identified. A variety of unexpected variables in the real environment act as hurdles that need to be overcome technologically. Crowds of people in front of a famous restaurant become noise to machines when determining location, and strongly contrasting lights and rapidly changing event structures also interfere with localization because of their changing visual characteristics. Our newly developed technology is showcased as an indoor AR navigation demo for the multi-floor shopping center of Hyundai Department Store–Pangyo. Demonstrating accurate and stable AR navigation in the real environment is the challenge we set for ourselves to overcome these technological limitations. a multi-floor demo at Hyundai Department Store–Pangyo (original video >) Challenges 1. Integration of complex spatial data such as multiple floors and stairs Visual localization (VL), which plays a key role in indoor AR navigation, begins with spatial information collected by Mapping Robot M1X. However, to provide seamless services across stairs, multiple floors and narrow hallways in the real environment, spatial information that integrates complex spaces a mapping robot cannot reach is also needed. To do so, we deployed COMET, our in-house developed backpack-type mapping equipment, along with M1X, and successfully linked outdoor and multi-floor vertical data for Hyundai Department Store–Pangyo. 
Indoor and outdoor 3D point cloud around Hyundai Department Store–Pangyo created by M1X and COMET 2. Location correction in real time while the user is moving The visual localization (VL) technology of NAVER LABS, which can accurately identify the current location even in indoor spaces where GPS does not work, offers the highest level of precision. When the user keeps moving, however, it is difficult to stably augment AR content with VL alone due to slight location deviations occurring in real time. As the first method to overcome this problem, we applied visual-inertial odometry (VIO), which tracks the location in real time by analyzing sensor and image information, in addition to VL. Nevertheless, the content often shakes, or lags several meters behind real time, due to rapid camera movements, network delays and temporary VL/VIO deviations. NAVER LABS has solved such problems with its in-house developed real-time camera pose tracking. It combines a location correction algorithm for real-time precision positioning, which uses various filtering logics to augment and track content stably at an accurate location while the user is moving, with a location prediction technique which analyzes previous travel routes to calculate the current location. Content that was unstable with VL/VIO (left) becomes stably augmented with the location correction algorithm (right) 3. Ensuring that AR navigation moves seamlessly between floors With the demo at Hyundai Department Store–Pangyo, NAVER LABS has for the first time completed a multi-floor navigation scenario in which the user moves between floors of an indoor space. It was a challenging project that required us to identify not only the spatial data for every floor, but also the precise floor the user is on, and to link them into the scenario. This may be easy for humans, but not for machines. 
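The kind of VL/VIO fusion described in challenge 2 above can be illustrated with a minimal sketch. Everything below is a hypothetical toy of ours (the class name `PoseFilter` and the blend factor are illustrative assumptions, not NAVER LABS’ actual algorithm): high-rate VIO deltas propagate the position estimate between fixes, and each intermittent absolute VL fix pulls the estimate partway back toward the true position, suppressing both drift and the visible snapping that makes AR content shake.

```python
# Minimal complementary-filter sketch blending VIO dead-reckoning with
# intermittent absolute VL fixes. Hypothetical illustration only.

class PoseFilter:
    def __init__(self, blend=0.3):
        self.blend = blend          # how strongly a VL fix corrects the estimate
        self.position = None        # current (x, y, z) estimate in map coordinates

    def on_vio_delta(self, delta):
        """High-rate relative motion reported by visual-inertial odometry."""
        if self.position is None:
            return                  # cannot dead-reckon before the first VL fix
        self.position = tuple(p + d for p, d in zip(self.position, delta))

    def on_vl_fix(self, fix):
        """Intermittent absolute position from visual localization."""
        if self.position is None:
            self.position = fix     # first fix initializes the estimate
        else:
            # Pull the drifting VIO estimate partway toward the VL fix instead
            # of jumping, so augmented content does not visibly snap.
            self.position = tuple(p + self.blend * (f - p)
                                  for p, f in zip(self.position, fix))

f = PoseFilter()
f.on_vl_fix((0.0, 0.0, 0.0))
f.on_vio_delta((1.0, 0.0, 0.0))    # user walks 1 m; estimate follows instantly
f.on_vl_fix((1.2, 0.0, 0.0))       # VL reports drift; correct gently
print(f.position)                  # close to (1.06, 0.0, 0.0)
```

A real system would filter full 6DOF poses with timestamps and outlier rejection, but the same trade-off applies: a larger blend factor trusts VL more and converges faster, at the cost of more visible corrections.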
To connect directions seamlessly through multiple floors, including getting on and off the escalator, we needed various location-based technologies. Our AR navigation identifies the current location with VL, tracks the location in real time and recognizes the user’s surroundings with camera pose tracking and visual object tracking (VOT), and augments content at an accurate location. Directions that naturally guide you from the escalator on 1F (left) to B1 (right) 4. Connecting AR content and real space using 3D maps AR can expand space itself into an interface, leading to more diverse and useful services alongside navigation. To make it easier to link with more diverse content, NAVER LABS has developed a location-based AR authoring system mediated by a 3D map created by a mapping robot. As the map shares the same coordinates as the real space, content appears at an accurate location if you place it on the 3D map. Not only can location values be corrected precisely, but specific content parameters and interactions can easily be applied to real spaces. AR content is augmented at the location in the real space corresponding to the 3D map. Without going out in person, you can easily browse the 3D map and place augmentations anywhere you want, which makes a wide range of scenarios possible. The Hyundai Department Store–Pangyo demo also applies various such scenarios, such as interactively opening a store’s menu and promotional information. Various scenarios applied to the demo - Examples of linking with store information, product coupons and promotional content Next steps Advancing technology by demonstrating in everyday environments The technological know-how we have acquired from demonstrations in real everyday spaces, and the problems we have newly identified and learned from, play a key role in rapidly advancing the technology of NAVER LABS. 
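The shared-coordinate idea behind the AR authoring in challenge 4 above can be sketched in a few lines. This is a hypothetical illustration assuming simple rigid transforms, not the actual authoring system: content is authored as a point in map coordinates, and once localization gives the camera’s pose in that same map frame, the point can be transformed into the camera frame for rendering.

```python
# Hypothetical sketch: content authored on the 3D map (map coordinates) is
# moved into the camera frame using the camera pose estimated by localization.

def invert_rigid(R, t):
    """Invert a rigid transform given a 3x3 rotation (row lists) and translation."""
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]            # transpose
    t_inv = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return Rt, t_inv

def map_to_camera(point_map, cam_R, cam_t):
    """cam_R, cam_t place the camera in map coordinates (camera-to-map).
    Rendering needs the inverse transform: map-to-camera."""
    R_inv, t_inv = invert_rigid(cam_R, cam_t)
    return [sum(R_inv[i][j] * point_map[j] for j in range(3)) + t_inv[i]
            for i in range(3)]

# Camera standing at (2, 0, 0) in the map, axes aligned with the map's
# (identity rotation), looking at a hypothetical coupon anchor.
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
coupon_anchor = [5.0, 1.0, 0.0]    # content an author placed on the 3D map
print(map_to_camera(coupon_anchor, I, [2.0, 0.0, 0.0]))   # [3.0, 1.0, 0.0]
```

Because authoring, localization and rendering all agree on the map frame, moving content only requires editing one point on the 3D map; no on-site adjustment is needed.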
NAVER LABS has established a pipeline across AR technologies, which includes not only a variety of location-based technologies such as mapping, VL, camera pose tracking and VOT, but also our internal rendering and content production technologies. Furthermore, we are packaging and testing the technologies we have developed in real-life environments, including Hyundai Department Store–Pangyo. In the near future, new location-based services will be provided in various daily life spaces. Our goal is to prepare technologies so that more diverse services can be provided in actual spaces. We will connect indoor and outdoor spatial data seamlessly, and further expand our research and demonstrations into more spaces relevant to daily life.
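As a rough illustration of the earliest stage of such a pipeline, the sketch below shows descriptor matching with Lowe’s ratio test, the generic step by which local features from a query photo are linked to features stored in the map before a camera pose is computed from the matches. The two-dimensional descriptors here are toy values of our own invention; real systems such as R2D2 use high-dimensional learned descriptors.

```python
# Toy sketch of nearest-neighbour descriptor matching with a ratio test,
# the step that links a query photo to the map before pose estimation.
import math

def ratio_test_match(query_descs, map_descs, ratio=0.8):
    """Return (query_idx, map_idx) pairs whose best match is clearly better
    than the second best (Lowe's ratio test). Assumes >= 2 map descriptors."""
    matches = []
    for qi, q in enumerate(query_descs):
        # Distance to every map descriptor, sorted so dists[0] is the nearest.
        dists = sorted((math.dist(q, m), mi) for mi, m in enumerate(map_descs))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:      # unambiguous nearest neighbour
            matches.append((qi, best[1]))
    return matches

query = [(0.1, 0.9), (0.5, 0.5)]                       # descriptors from a photo
mapped = [(0.1, 0.88), (0.9, 0.1), (0.52, 0.48)]       # descriptors in the map
print(ratio_test_match(query, mapped))                 # [(0, 0), (1, 2)]
```

Surviving matches pair 2D keypoints in the photo with 3D points in the map; a pose solver then recovers the 6DOF camera pose from those correspondences.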
A place preparing for the future should also look back. Here is a summary of NAVER LABS’ defining moments in 2019, which were challenging but worth the effort. CES 2019 & the World’s First Demonstration of a 5G Brainless Robot It was around this time last year. We had our hands full preparing for our booth at CES 2019, NAVER’s first. We were on edge about revealing the range of achievements we had researched and accomplished to people from all over the world. But we had an ace up our sleeve: the demonstration of NAVER LABS’ 5G brainless robot. We may be familiar with 5G now, but a year ago it had not yet been commercialized. Back then, it was unclear who would be able to control high-performance robots built upon the ultra-low latency of 5G. The successful demonstration of the world’s first 5G brainless robot technology caught so many eyes at the show that we had to double our demonstration schedule. The fascinating brainless technology unveiled that day is also the key solution for the cloud-based robot service that we continue to refine and innovate. Other technological demonstrations were also highly successful. A wide array of technologies garnered great attention, such as our uniquely original hybrid HD mapping for autonomous vehicles, AROUND, a robot platform that drives naturally and autonomously based on the map cloud and deep reinforcement learning without any laser scanners, and AHEAD, a three-dimensional augmented reality head-up display. “I must say I was most impressed by the robots of NAVER LABS at CES 2019.” (Professor Dennis Hong of UCLA during a press interview) New CEO Sangok Seok As soon as we returned from CES, a significant change came to NAVER LABS: Sangok Seok, leader of NAVER LABS’ robotics group, was appointed as the new CEO. 
There were countless articles in the vein of “What is happening at NAVER LABS?” The company appeared to be in a crisis, and in fact, it was. It is not easy for any organization to replace its leader. Looking back now, it’s safe to say that although time went fast, all was and is well! The new CEO did a great job, and all the members of NAVER LABS remained passionate. It may be better to have the habit of searching for shortcuts amid the chaos of change written into our DNA than to be buried in gradual stagnation. With all that said, we would like to introduce CEO Sangok Seok, who began to lead NAVER LABS this year. His research during his years as a PhD student in mechanical engineering, on the soft autonomous robot Meshworm and the running robot MIT Cheetah, drew great attention, and his paper on MIT Cheetah won the best paper award from the IEEE/ASME Transactions on Mechatronics in 2016. Since joining NAVER in September 2015, he has focused on popularizing robot platforms that naturally provide information and services to people in their daily lives, while also carrying out a wide range of advanced research and investing in the internalization of cutting-edge technologies. “NAVER LABS is comprised of world-class talent. Together, we will strive to upgrade our technology platforms with the most innovative and natural interfaces.” (CEO Sangok Seok during an official interview upon his appointment as CEO) Vision of a Future City, A-CITY Last June, we held a conference to present the visions and roadmap of NAVER LABS, where CEO Sangok Seok offered the intriguing concept of “autonomous space.” He emphasized that spaces containing information, services and products will move on their own and create new connections within a city, with space and mobility combined rather than separated. 
Just as the advent of elevators expanded urban space vertically for citizens, the future of autonomous space will bring a whole new dimension into our lives. This vision of a future city is ‘A-CITY’, and it is also the goal of the technological research currently conducted at NAVER LABS. Admittedly, A-CITY is just a concept and an image. However, this kind of futuristic vision is essential even for a company devoted to studying and researching technologies. We need more than descriptions of the excellence of our technologies, because the ultimate destination of so-called future technologies is not labs or showrooms, but our daily lives. In this era, it is the duty of those who conduct research to make known what changes we may imagine and what kind of lives we may expect to live through such technologies. Therefore, the day’s presentation ended with the following phrase. “Technology by us, imagination by all!” (CEO Sangok Seok during a press conference) NAVER LABS Scouts Professor Sangbae Kim of MIT as Technical Consultant Professor Sangbae Kim, a world-class robot engineer, joined NAVER LABS as a technical consultant. In fact, the MIT Biomimetic Robotics Lab led by Professor Sangbae Kim had already been working closely with NAVER LABS: both the MIT Cheetah III and the MIT Mini Cheetah are results of the industry-academia cooperation between MIT and NAVER LABS. We anticipate the commercialization of service robots to take place in the following order of stages: wheel-based autonomous robot → four-legged walking robot → robot capable of safely providing services with robot arms and hands. The industry-academia cooperation led by Professor Sangbae Kim is, therefore, a kind of advance research preparing for the stage after the wheel-based robot. 
(In a similar vein, NAVER LABS has also continued its industry-academia cooperation with Professor Yong-jae Kim of Korea Tech on the robot arm AMBIDEX.) Another brilliant plan is underway with Professor Sangbae Kim. We have disclosed our plan to manufacture a small number of MIT Mini Cheetahs and distribute them among leading researchers throughout the world. As widely known, the MIT Mini Cheetah is one of the best performing four-legged walking robots in the world. It is famous for dynamic motion capable of a backflip, natural walking on complex terrain, and the fastest speed among existing small motor-based four-legged robots. It serves as an excellent platform that allows those with research interests in AI or robotics to personally test hardware and study algorithms. The results, which will be another opportunity to unlock the potential of robot mobility and further drive joint research outcomes, will be presented at the MIT Mini Cheetah Workshop at this year’s International Conference on Intelligent Robots and Systems (IROS 2020). “There is no need for robots to do what people are already good at. There are other areas of strength for robots.” (Professor Sangbae Kim during a NAVER seminar presentation) COMET & M1X M1 is NAVER’s first-ever robot. It creates high-precision indoor maps and serves as the starting point of NAVER LABS’ technological roadmap. The spatial data created by M1 is used for VL, AR, self-updating maps and service robots, among many other things. However, M1 is having a good rest this year. It has handed over a significant portion of its roles to its successor, M1X. M1X obtains data faster, and its ability to utilize data has widened further. (Although not intended, its production cost has also fallen.) Still, many areas remain inaccessible even for M1X. Complex terrain such as stairs and sidewalks poses challenges for wheeled robots. 
That is why we developed the backpack-type mapping device called COMET. We have also developed technologies that keep data quality consistent across a variety of environments and illumination levels, and that calibrate data collected while being shaken during walking. In the end, COMET, bridging M1 for indoor spaces and R1 for roads, completes NAVER LABS’ mapping triad. For now, engineers carry COMET on their backs to perform the mapping; in the future, however, the aforementioned Cheetah robot will take over COMET’s role. All of our plans are interconnected. “The most valuable gain from COMET may be that we are creating mapping device standards unique to NAVER LABS.” (Team Leader Sungyong Chung during a company interview)

VL Advancement & AR Navigation Demonstration

One of the technologies NAVER LABS considers among the finest in the world is its visual localization (VL) technology. VL calculates very precise positions from a single photograph, which makes it particularly useful indoors where GPS is unavailable. The data obtained from M1X and COMET are used for VL. Another significant upgrade took place this year: the R2D2 (Reliable and Repeatable Detectors and Descriptors for joint Sparse Keypoint Detection and Local Feature Extraction) technology of NAVER LABS Europe took first place in the Long-Term Visual Localization challenge at CVPR 2019. Although VL has primarily been used at NAVER LABS to calculate positions for autonomous driving, other projects are also in progress, notably the development of indoor AR navigation. Following the COEX mall demonstration unveiled at CES 2019, we held another demonstration at Hyundai Department Store in Pangyo. Being a department store, it was crowded with people (all of which constitutes noise), the lighting varied from store to store (another technological hurdle), and, even more challenging, the venue spanned multiple floors connected by escalators.
This time, we focused on creating natural transitions between floors. In addition to VL, we enhanced the technology that analyzes various sensor and image information to track location, corrected errors from movement and network delays through real-time camera pose tracking, and utilized the VOT and location-based AT authoring system for seamless augmentation of useful information, successfully delivering a more complete demonstration. “A lot of people are asking if it’s computer-generated, but it’s not.” (CEO Sangok Seok, describing the AR navigation demonstration at a press conference)

2,000 km of Road Layout Data for Seoul and HD Map Dataset Distribution

HD maps are key data for autonomous on-road driving. NAVER LABS has therefore developed its own technology known as hybrid HD mapping: first, road layout information is extracted from aerial photographs, then organically combined with the data collected by R1, our own mobile mapping system (MMS), as it drives around the area. This is a highly useful solution for creating HD maps of large urban areas. Using this technology to precisely extract road layouts from aerial photographs of Seoul, NAVER LABS completed 2,000 km of road layout data this year, covering the major roads of four lanes or more on which autonomous vehicles travel within Seoul. NAVER LABS also carried out another meaningful plan this year: we distributed, free of charge, the HD map dataset created with our hybrid HD mapping technology. The aim was to promote research on autonomous driving in Korea, and NAVER LABS was the first company in the nation ever to do so. After all, joining hands to grow the necessary technologies will benefit everyone in the future. We believe this attempt will not be in vain.
There is also a sense of crisis, as many companies throughout the world are making rapid progress. “Our sense of mission drives our commitment to developing these technologies. We must be ready, or we may have no option but to hand domestic data over to foreign companies.” (Senior Leader Jongyoon Peck during a press conference)

1784 Project Unveiled

The 1784 project for NAVER’s second office building, which had been progressing confidentially, was showcased during a keynote speech at DEVIEW 2019. The project aims to complete the world’s first robot-friendly building as NAVER’s new headquarters by 2021. To this end, we are converging all the technologies that will drive the future of NAVER, such as 5G brainless robots, autonomous machines, AI and cloud. It is also the first reference space for A-CITY, NAVER LABS’ vision of future cities. (Where shall we head next?) “With the 1784 project, we aim to realize the true first generation of service robots.” (CEO Sangok Seok during a keynote speech at DEVIEW 2019)

AROUND C Pilot

A new robot on the AROUND platform is now in service. AROUND C, designed for pilot tests at the cafe on the first floor of NAVER Green Factory, provides cafe delivery services. The main test areas were cloud-based control, self-driving algorithms optimized through deep learning, and natural human-robot interaction (HRI). For the first time, this version applies nonverbal communication through gaze. On the day the official pilot test began, the robot was tremendously popular among the groups of children visiting NAVER Green Factory. Although this posed an unexpected challenge for the test engineers from day one, the rugged paths of life are, after all, full of variables for both robots and engineers. The service remained perfectly safe and seamless despite the children flocking around the robot.
Robots will always be friends of children, and children will be friends of robots. “We sought to apply an interaction design to AROUND C that is considerate of the people sharing its space while not being frustrating or awkward.” (PDX Leader Seoktae Kim at NAVER DESIGN COLLOQUIUM ‘19)

AI for Robotics Global Workshop

The office of NAVER LABS Europe is often described as an ‘artificial intelligence lab located in the most beautiful place in the world.’ In late November, eleven leading scholars in AI and robotics from all over the world gathered there in Grenoble, France, and held the ‘AI for Robotics Global Workshop’ under the theme of ‘how AI can help integrate robots into everyday life.’ The occasion was graced by Marc Pollefeys, a professor at ETH Zurich who was the first to present a way of automatically converting photographs into 3D models; Cordelia Schmid, a research director at INRIA who continues to garner attention as a next-generation leader in computer vision; Daniel Cremers, a professor at the Technical University of Munich widely known for SLAM, a core technology for autonomous driving; CEO Sangok Seok; and Professor Sangbae Kim. The AI for Robotics Workshop was also the kick-off of the ‘Global AI Research Belt’ announced by NAVER. The Global AI Research Belt is an AI technology and talent network that connects Europe and Asia to stand against GAFA (Google, Amazon, Facebook, and Apple) of the US and BATH (Baidu, Alibaba, Tencent, and Huawei) of China. “After all, it is humans who study AI. The first, second and third most important components required for AI are talent, talent, and talent.” (CEO Sangok Seok during a keynote speech at DEVIEW 2019) Although we have narrowed the list down to 10, there are other areas that are also worth noting.
These include the ALT project, the ACROSS project, and AIRCART OPENKIT as well as its wheelchair version. For those who find these fields interesting, please visit the NAVER LABS website for further information. From CES at the beginning of the year to the AI for Robotics Global Workshop at its end, the year came full circle, clearly demonstrating the adage that “technology knows no borders.” In 2020, we will refuse to be bound by any limitations or boundaries, and continue to power our imagination that defies challenges through technology.
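As a small technical coda to the visual localization work mentioned above, here is a minimal, hypothetical sketch of the first step in image-based localization: matching local feature descriptors from a query photo against descriptors stored in a prebuilt map, using nearest-neighbor search with Lowe’s ratio test. The function name, toy arrays and threshold are illustrative assumptions, not NAVER LABS’ actual implementation; in a real VL system, the surviving 2D–3D matches would then feed a PnP + RANSAC solver to recover the 6DOF camera pose.

```python
import numpy as np

def match_descriptors(query_desc, map_desc, ratio=0.8):
    """Match query descriptors to map descriptors with Lowe's ratio test.

    query_desc: (Nq, D) array of local feature descriptors from the query image.
    map_desc:   (Nm, D) array of descriptors stored in the prebuilt map,
                each associated with a known 3D point.
    Returns a list of (query_index, map_index) pairs that pass the test.
    """
    matches = []
    for i, d in enumerate(query_desc):
        # Euclidean distance from this query descriptor to every map descriptor
        dists = np.linalg.norm(map_desc - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        # Accept only matches clearly better than the runner-up (ratio test)
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches

# Toy example: two query descriptors that should match map entries 0 and 2
map_desc = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
query_desc = np.array([[0.95, 0.05], [1.02, 0.98]])
print(match_descriptors(query_desc, map_desc))
```

The ratio test is what makes localization robust in repetitive indoor environments: ambiguous descriptors with two similar candidates are simply discarded rather than matched incorrectly.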
Serving a refreshing drink from the bar to your table safe and sound might seem like a simple, menial task, but to do it a robot must overcome numerous obstacles, understand its surroundings, and communicate its intentions. Moreover, even if robots succeed in carrying out particular tasks, their services will not be sustainable if customers feel uncomfortable around them in the process. For more robot services to enter the spaces where we interact and live daily, we first need to consider and identify the ways in which robots interact with humans. NAVER LABS developed AROUND C, a cafe delivery robot, primarily to study how human-robot interaction (HRI) should be conducted. The AROUND C team of UX designers, robotics engineers and product designers defined the following principles for robots:

Robots, first and foremost, must operate safely.
- Must not cause physical harm, such as collisions.
- Move along paths with the least risk of accidents.
- Be able to stop immediately when an incident occurs.
- Give warning signals in case of danger.

Robots shall provide the most efficient service.
- Move at the most economical speed and along the most economical path.
- Be able to comprehend quickly, without mistakes or difficulty.

Robots must be approachable, human-friendly and safe when sharing a space with people.
- Keep a safe distance from people so as not to feel threatening.
- Let users know their operating status and direction of travel.
- Operate smoothly and refrain from behaviors that cause anxiety and discomfort.
- Have an approachable, human-friendly appearance that does not cause fear.
- Communicate calmly and smoothly.

Robots shall communicate naturally to avoid a sense of foreignness.
- Resemble machines more than humans, so as to operate in harmony with everyday spaces.
- Communicate nonverbally via facial expressions, sounds, lights, etc. instead of verbally via speech and text.
- Refrain from being overly sociable, and interact only when needed so as not to disturb humans.
- Use expressions that do not become irritating even after repeated use.

Robots, in short, shall operate naturally without being dangerous.

Human-Friendly Navigation via Reinforcement Learning

Robots must move safely and efficiently even in complex everyday spaces full of people and obstacles. Moreover, people sharing a space with robots should feel comfortable around them. AROUND C is equipped with technology that uses reinforcement learning to move naturally while maintaining a psychologically safe distance (proximity) from humans. When an HRI designer specifies the ideal speed and route for a given situation, AROUND C applies deep learning to optimize its autonomous driving accordingly. “The autonomous driving of AROUND C, based on reinforcement learning that reflects people’s preferences, moves at a speed similar to actual people walking in real-world environments, avoids obstacles smoothly, and slows down to keep a safe distance when it detects a person nearby. Even if it could avoid obstacles with greater probability, it should still not appear dangerous to people.” (Choi Jin-yeong, Robotics)

Robots shall be able to convey their intentions in the most natural way

Nonverbal Communication via Facial Expressions, Sounds and Lights

AROUND C uses a fluid and smooth method of nonverbal communication via facial expressions, sounds and lights. The core of the robot’s design is less about making it a conversational object and more about enabling appropriate, comfortable communication. AROUND C can express various modes and states via facial expressions, such as using its gaze to indicate its direction of movement, which is essential for its services.
To this end, NAVER LABS chose graphics of points and lines instead of just two eyeballs, so that AROUND C’s expression and communication can change smoothly and naturally in many ways. “AROUND C is designed with a calm and polite personality. Rather than actively talking or expressing its emotions, it moves around with a calm expression and politely says ‘Thank you’ after delivering a drink. The facial expressions, sounds and lights that AROUND C makes reveal its personality, and we are still working to make it more approachable for those who are wary of robots.” (Cha Se-jin, UX)

Creating a new standard for service robots

AROUND C was designed as a cafe delivery robot, but it is not intended for commercial use as a service robot in cafes; rather, it is a pilot model for the empirical research on HRI currently being conducted by NAVER LABS. The goal of this experiment is to test hypotheses from various user interaction scenarios in a real environment, and to obtain optimal algorithms. Of course, it is also important to observe the actual (and unexpected) results of real-world application directly, since the real world is full of variables, especially for robots.
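The human-friendly navigation described above can be made concrete with a small sketch. The reward shaping below is purely illustrative; the function name, weights, distances and reward terms are assumptions, not NAVER LABS’ actual formulation. It rewards progress toward the goal while penalizing intrusion into a person’s psychologically safe distance, which is the kind of trade-off a policy like AROUND C’s is described as balancing.

```python
import math

def navigation_reward(robot_pos, goal_pos, people, prev_goal_dist,
                      safe_dist=1.2, w_progress=1.0, w_proximity=2.0):
    """Illustrative per-step reward for socially aware navigation.

    robot_pos, goal_pos: (x, y) tuples in metres.
    people: list of (x, y) positions of nearby pedestrians.
    prev_goal_dist: distance to the goal at the previous step.
    Rewards progress toward the goal and penalizes getting closer to any
    person than `safe_dist`, a stand-in for a comfortable social distance.
    """
    goal_dist = math.dist(robot_pos, goal_pos)
    reward = w_progress * (prev_goal_dist - goal_dist)  # positive if we got closer
    for p in people:
        d = math.dist(robot_pos, p)
        if d < safe_dist:
            # Penalty grows the further the robot intrudes into personal space
            reward -= w_proximity * (safe_dist - d)
    return reward

# Moving 0.5 m toward the goal with no one nearby earns a positive reward...
print(navigation_reward((0.5, 0.0), (5.0, 0.0), [], prev_goal_dist=5.0))
# ...but the same progress while brushing past a person is penalized.
print(navigation_reward((0.5, 0.0), (5.0, 0.0), [(0.5, 0.4)], prev_goal_dist=5.0))
```

Because the proximity penalty outweighs the progress term near people, a policy trained on such a reward learns to slow down or detour around pedestrians even when the straight path is geometrically clear, matching the behavior quoted above.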
AI is essential for robots to help us in our daily lives, and robots capable of interacting with humans in real-world physical spaces are among the most important media bridging the gap to that future. Focused on the exciting subject of artificial intelligence for robotics, eleven international scholars in AI and robotics gathered at NAVER LABS Europe in Grenoble, France. Hosted by NAVER LABS Europe, the “AI for Robotics” workshop brought together world-class researchers in the fields of computer/3D vision and robotics for two days of discussion, from November 28 to 29, on the topic of “how AI can help integrate robots into everyday life.” During the workshop, CEO Sangok Seok introduced the wide range of technologies, including robotics, autonomous driving, AI and HD maps, that NAVER, an online platform company, is researching to provide valuable services to users in real-world physical environments. NAVER LABS successfully demonstrated the world’s first 5G brainless robot this year, and has continuously unveiled robot technologies for useful services in daily life, including the “AROUND” platform, developed independently with the aim of popularizing service robots; the “AMBIDEX” robot arm, capable of safe interaction with people; and the “ALT” platform for autonomous robots on roads. Sangbae Kim, a professor at MIT and technical advisor to NAVER LABS, also participated in the workshop. Kim suggested that service robots for homes, providing services such as delivery and care for the elderly and infirm, should have “physical intelligence” to physically interact with people, and shared a design paradigm to achieve this end.
In addition, Marc Pollefeys, a professor at ETH Zurich who was the first to propose a way of converting photographs into 3D models; Cordelia Schmid, research director at INRIA, who is taking center stage as a next-generation leader in computer vision; and Daniel Cremers, a professor at TU Munich famous for simultaneous localization and mapping (SLAM), a core technology for autonomous systems, among many others, graced the occasion and shared their thoughts on the future of AI and robots. “For robots to integrate into our daily lives, we need to teach them to learn and operate on their own. I look forward to seeing that at this workshop, where many experts in AI and robotics will exchange ideas and cooperate, contributing to advancing the future a little faster.” (Senior Researcher Martin Humenberger of NAVER LABS) “Although robot and AI technologies continue to evolve, integrating the two different areas still remains a difficult challenge. We can mark this occasion as a milestone, both because innovations will emerge from the discussions held at this workshop and because these latest discussions are being led by a Korean IT company.” (Sangbae Kim, professor at MIT and technical advisor to NAVER LABS) Through this workshop, we will share knowledge and experience on AI-based solutions to the wide range of problems robots face while operating in continuously changing real-world environments, and on ways to draw out more natural interactions between humans and robots, so as to promote the growth of future AI and robotics technologies. The “AI for Robotics” workshop also serves as the starting point for NAVER’s “Global AI R&D Belt,” unveiled during the keynote at the recent DEVIEW 2019. Prior to the workshop, on the 25th, CEO Sangok Seok also introduced NAVER’s Global AI Research Belt strategy to French startups at Station F in Paris.
NAVER LABS will support leading researchers throughout the world so they can easily exchange ideas and cooperate through our global technology and research network linking Asia and Europe, and will continue to invest in fostering new talent through this network.
AHEAD is a 3D AR HUD (head-up display) developed by NAVER LABS. It is a HUD unlike any other, displaying information such as automotive navigation, forward collision warning (FCW) and lane departure warning (LDW) on the windshield optically and seamlessly. Existing HUDs can disrupt drivers’ concentration because the focal plane of the displayed information differs from that of the road ahead: when focusing on the HUD image, the road appears blurry, and vice versa. To tackle this problem, AHEAD applies 3D optical technology to present information converged with the road across the driver’s entire field of view, both near and far. Early this year, AHEAD drew much attention by winning a CES 2019 Innovation Award. As announced at the time, NAVER LABS has been pursuing a development project to link its localization technology with advanced driver-assistance systems (ADAS) using data from HD maps, and the result was verified through an actual test-drive demonstration. AHEAD, self-developed by the NAVER LABS Autonomous Driving Team, provides driver-assistance information, such as lane-level navigation, guidance and warnings, and safe-driving functions, such as FCW and LDW, based on localization results using HD maps of roads. Roads are at once special and ordinary. To parse the nuances of that duality and gain a better understanding of roads for the future of road spaces, NAVER LABS is continuing and furthering its research on a wide range of technologies.
The MIT Mini Cheetah is a quadrupedal robot developed through an academia-industry collaboration between Professor Sangbae Kim’s MIT Biomimetic Robotics Lab and NAVER LABS, and it has since come into the limelight across the world. Unlike industrial robots that are vulnerable to impact, it is designed around force control and is therefore capable of dynamic actions. It can walk naturally in various environments, such as on uneven, soft and hard floors, can recover its balance immediately after an external impact, and boasts the fastest speed among electric motor-based quadrupedal robots currently in existence. The fact that it can perform backflips despite being a mini quadrupedal robot is also drawing much attention. Early this year, Professor Kim expressed through the media his intention to produce and distribute multiple MIT Mini Cheetahs. To this end, he is preparing to hold the MIT Mini-Cheetah Workshop (MCW) together with NAVER/NAVER LABS. The workshop aims to provide robotics/AI engineers with an opportunity to test the robot hardware directly and study new algorithms, thereby rediscovering and sharing the hidden potential of quadrupedal robots. The results will be unveiled at IROS 2020, an international conference on intelligent robots and systems. ▶ Website introducing the MIT Mini-Cheetah Workshop ▶ Website of the MIT Biomimetic Robotics Lab
NAVER LABS’ paper “Drop to Adapt: Learning Discriminative Features for Unsupervised Domain Adaptation” has been accepted to the International Conference on Computer Vision (ICCV) 2019, one of the most prestigious international conferences in computer vision and pattern recognition. The authors are Researcher Nam-il Kim of the NAVER LABS Autonomous Driving Group and two interns. The paper studies domain adaptation, which aims to let deep-learning models trained on data from virtual environments work on real data. It proposes a methodology that is simpler than existing methods while delivering stronger performance, and that can be widely applied to existing image-based models. Domain adaptation, which applies a deep-learning model trained on existing data (e.g. virtual data, camera A) to a new set of data (e.g. real data, camera B), is a subject drawing attention in various fields. Its necessity is especially emphasized in autonomous driving and robotics applications, not only when the platform’s sensor changes, but also when a virtual simulator must stand in for environments where real data is difficult to obtain. The paper proposes a method of reshaping the feature space formed from the existing data (source domain) so that new data (target domain) is successfully classified, based on machine-learning theory and mathematical modeling, without using the target domain’s label information. The proposed method can be applied to a range of deep-learning models, and it delivered excellent performance regardless of dataset size. Download the paper > In addition, NAVER LABS Europe presented the results through a poster session and workshop. We expect the results of this paper to be utilized in diverse computer vision-related fields and services.
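As a rough illustration of the general theme of dropout-based adaptation without target labels, the sketch below shows a self-consistency loss: it penalizes disagreement between a classifier’s prediction on clean features and on dropout-perturbed features of unlabeled target-domain data. This is a simplified toy inspired by the paper’s direction, not a reproduction of its actual method; every name, shape and constant here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(features, weights, drop_prob=0.5):
    """Mean KL divergence between predictions on clean and dropout-perturbed features.

    features: (N, D) unlabeled target-domain feature vectors.
    weights:  (D, C) linear classifier mapping features to class logits.
    No target labels are needed: the loss only compares the model with itself.
    """
    # Inverted dropout mask: zero out features and rescale the survivors
    mask = (rng.random(features.shape) > drop_prob) / (1.0 - drop_prob)
    p = softmax(features @ weights)           # prediction on clean features
    q = softmax((features * mask) @ weights)  # prediction on perturbed features
    # KL(p || q) is large when dropout flips the prediction, i.e. when
    # samples sit close to the decision boundary in feature space.
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)))

features = rng.standard_normal((8, 16))
weights = rng.standard_normal((16, 3))
loss = consistency_loss(features, weights)
print(loss)  # a non-negative scalar
```

Minimizing such a loss over the target domain pushes the decision boundary away from dense regions of target features, which is one intuition behind why unlabeled target data can still shape the classifier.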
Learning with Average Precision: Training Image Retrieval with a Listwise Loss (Jérôme Revaud, Jon Almazán, Rafael Sampaio de Rezende, César De Souza) Read more >
Fine-Grained Action Retrieval through Multiple Parts-of-Speech Embeddings (Michael Wray, Diane Larlus, Gabriela Csurka Khedari, Dima Damen) Read more >
Moulding Humans: Non-parametric 3D Human Shape Estimation from Single Images (Valentin Gabeur, Jean-Sébastien Franco, Xavier Martin, Cordelia Schmid, Grégory Rogez) Read more >
SLAMANTIC: Leveraging Semantics to Improve VSLAM in Dynamic Environments (Martin Humenberger) Read more >
At DEVIEW 2019, NAVER LABS gave a keynote presentation and unveiled its company roadmap. Starting with an introduction of the MIT Mini Cheetah workshop, the keynote covered the company’s progress across various research areas, including HD mapping, visual localization (VL), indoor AR navigation, the 5G brainless robot, the service robot platform, autonomous driving, and human-robot interaction (HRI), in addition to new announcements. The following is a summary of the keynote, focusing largely on the new announcements, for those who could not make it to the event.

A-CITY and the 1784 Project

We made clear that the ultimate goal of our projects is to create A-CITY: an urban area in which every space in the city center is tightly connected through various autonomous machines, with services such as delivery and logistics fully automated. A project integral to the A-CITY roadmap was made public for the first time during this keynote. The “1784 Project” will convert NAVER’s second company building into a tech convergence building, where every technology shaping NAVER’s future, concerning robots, autonomous machines, AI, cloud computing and the like, will be integrated and connected. “This place will be filled with the most human-friendly robots developed by NAVER LABS, and the building will be designed as a robot-friendly building with optimal infrastructure to maximize the robots’ functions and the service experience,” said CEO Sangok Seok during his keynote speech. He further emphasized that “the goal [of NAVER LABS] is to implement the true first generation of service robots in this space.” NAVER’s second building will serve as the most effective accelerator for NAVER LABS’ technical advancement.

Free Distribution of HD Map Data

The first stage of the A-CITY roadmap begins with creating high-precision maps that machines can use anytime, anywhere.
This keynote was no different. NAVER LABS creates high-precision maps with the M1 mapping robot for indoor spaces, COMET for stairs and complex terrain, and hybrid HD mapping technology that combines R1 with aerial photogrammetric data for roads. There was also a new announcement in this area: a plan to distribute the HD map data for roads free of charge, the first such move by a private company in Korea. NAVER LABS will provide the dataset created with its hybrid HD mapping technology to domestic startups and researchers studying autonomous machines, and plans to continuously expand the regions and types of data covered.

ALT Platform, Robots on the Road

Without a doubt, HD maps of roads are also crucial to NAVER LABS’ autonomous driving technologies. Since its commencement in 2016, NAVER LABS’ research on autonomous driving has raised many questions, and NAVER LABS’ answer to them is the ALT platform. ALT is a platform for autonomous robots on the road; it can be thought of as the road-going counterpart of “AROUND,” the indoor autonomous robot service platform NAVER LABS had previously developed. Autonomous service technologies for roads and the high-precision data will be packaged as ALT-0, the general basis of the platform, and custom versions combining various service scenarios will be demonstrated through pilot tests. Our ultimate goal is to integrate it with the AROUND platform so that every space is connected: AROUND and ALT are the key machine platforms connecting all spaces within A-CITY.

Synergy from Cooperation, and Nurturing Talent

Although this keynote was held to present the progress of NAVER LABS’ technology roadmap, it also carried two important keywords: cooperation and talent.
The demonstration of the MIT Mini Cheetah at the outset of the keynote was aimed at showing the hidden potential of four-legged robot hardware unlocked by AI researchers across the world, while the distribution of the HD map dataset was made public for the mutual growth of domestic autonomous technologies. The “1784 Project” will converge the future technologies of every organization within NAVER to realize new potential. As such, we remain focused on the synergy created by technological connections. We are also well aware that nurturing talent is crucial to maximizing that synergy. A new plan by NAVER, the Global AI Research Belt, was presented at the end of the keynote. The Global AI Research Belt is a new technology network connecting AI researchers in Korea, Japan, France and Southeast Asia with startups and agencies. The plan is to support leading researchers worldwide so that they can easily interact and cooperate, and to continue investing in nurturing new talent through this network. The Global AI Research Belt is scheduled to take off at the end of this coming November with the “AI for Robotics” workshop, where leading scholars in AI and robotics will gather at NAVER LABS Europe in Grenoble, France, for in-depth discussions. It is true that American and Chinese companies currently lead the trend in AI development, but our goal is to create a new, competitive tide. CEO Sangok Seok wrapped up the keynote with his expectations: “I look forward to seeing Korea’s outstanding talents freely crossing borders within the Global AI Research Belt to continue technological cooperation and seize the opportunity for further technological growth.”
We are offering the HD map dataset and localization dataset produced by NAVER LABS free of charge. Go to NAVER LABS HD MAP Dataset HD maps are core data for autonomous driving technology, which is why some liken them to the “brain of autonomous vehicles” or even an “extra sensor.” Open data for autonomous driving, released in diverse forms by universities, enterprises and national research institutes, is precious: with it, anyone can quickly develop and verify algorithms. Until now, however, precision mapping data has been difficult to obtain anywhere. Thus, to promote mutual success alongside the academia and startups performing similar research, NAVER LABS has decided to provide a portion of its precision data free of charge. “There are numerous institutions researching autonomous driving based on HD mapping. Having more open data to use is always better, though it continues to be limited. I believe that the data to be made public will be helpful.” (Sujung Kim, Mapping & Localization) The HD maps being released were produced with the hybrid HD mapping technology developed in-house by NAVER LABS for rapid production of large-scale precision maps of urban areas. First, road layout data is extracted from aerial photographs; then the R1 mobile mapping system (MMS) unit produced by NAVER LABS traverses the corresponding area, and the two sources of data are organically combined to produce an HD map. Alongside the mapping data, the localization dataset being released is also crucial for developing technology that accurately matches location and positioning data in urban areas. “It is crucial that the localization values of the data match absolute coordinates. Because the dataset comprises HD maps of urban areas like Pangyo and Sangam rather than simple highway data, it is also suitable for research on precision localization in complex driving environments.”
(Jinhan Lee, Mapping & Localization) This free distribution of the HD map dataset is a first for a private company in Korea. NAVER LABS will not only release this data, but also plans to continue expanding the areas provided. “Although we tend to think of our role as focusing only on the technology itself, the fact is we also feel a sense of mission when looking at global trends. There are already many enterprises with a similar view in Korea, and it has led to lively social discussion on the topic. I have been thinking about what other roles we can take on to contribute more. I hope this dataset release becomes one more catalyst for the rapid growth of the many autonomous driving researchers and startups in Korea.” (Jongyoon Peck, Autonomous Driving) Go to NAVER LABS HD MAP Dataset
The kind of talent NAVER LABS seeks is a passionate, self-motivated team player. Perhaps the term “self-motivated team player” is an oxymoron. Even so, we have continued to venture out, and the culture continues to snowball. We present the stories of experts in various fields cooperating without boundaries, making decisions on their own and taking on challenges together. Here is how we work at NAVER LABS. Digitalizing spatial data, or “high-precision mapping,” is a task of utmost importance for NAVER LABS, and it is from such work that the technologies of NAVER LABS emerge. NAVER LABS’ COMET project aims to research mapping technologies for complex topography that is difficult for mapping robots and mobile mapping system (MMS) vehicles to access, and to develop and set the standards for NAVER LABS mapping devices. Prior to this project, there were many attempts and failures. Naturally, one expects failures to eventually bear fruit, but reaping the benefits of hard work is not easy: conditions and situations do not always allow it, and it is easy to grow weary, since we are human after all. That is why we grew curious about the COMET team, and we listened to their stories about what they encountered and experienced. Q. What kind of project is COMET? (Eungyo Jung | TL) Development of mapping devices in the past concerned mostly fixed types or was limited to specific topography. COMET, however, was predicated on the premise that a mapping device needs to collect data regardless of geographic features: not only in standardized places such as indoor environments and roads, but also on uneven sidewalks, stairs and winding hiking trails. High-precision data must be collectable no matter the topography, which is why we began with a backpack-type design. The project name captures this idea as a whole.
(Seong-jun Lee | PM) That’s why we named this project COMET. In space, there are not just planets that move along specific orbits, but also comets that cut across those orbits. The COMET project takes its place between two technologies of NAVER LABS: M1, the indoor mapping robot, and R1, the mobile road mapping system. It connects all the spaces that have been tough to cover up until now. Let’s paint a stroke resembling a comet (Sungyong Chung | Hardware/Firmware Design) There were actually several other projects with different concepts, but they were halted because of unexpected variables, both internal and external. There was even a project that was near completion. All the drive and passion that I had nearly disappeared at the time, but I remember Seong-jun proposing the COMET project by saying, "Let’s paint a stroke resembling a comet in the company’s history one last time." That’s the meaning behind the name, don’t you agree? “The core concept of COMET is seamlessly connecting the areas that have been difficult to access with high-precision mapping devices in the past. There is already a sufficient number of solutions for roads or indoor spaces where the movement environment is relatively uniform. However, there are still many complex topographic features and areas for which it is tough to make high-precision maps. Being able to seamlessly connect the spatial data in those areas through COMET is the greatest accomplishment.” Q. How did your team utilize failures? A process is needed to make even a failure an asset (Seong-jun Lee | PM) We’ve learned many things from various attempts and failures prior to COMET, and they’ve become assets for the project. Even if a project is halted, everything that has been accumulated should not disappear. That’s why we wanted to establish a process to turn the experiences and know-how we’ve gained through each project into assets.
An overarching framework was set up first, and each step proceeded in design sprints. The fact that we could visually see how far we had come also helped us greatly. Every ending is connected to a new beginning (JungHoon Cheon | Programming/Hardware Design) The information from all projects that have been carried out is organized and made public. I was able to speed up development by referring to previous solutions. I considered such organization critical because I believed that COMET is not the end but a step that leads to the next project. We put much consideration into technologies that may be utilized in future projects. For instance, we design efficient collection protocols for various sensor data on the assumption that they will be utilized in future projects, or apply a firmware update feature to the circuit board to prepare for expandability in advance. What happens when a process works (Sungyong Chung | Hardware/Firmware Design) I, too, actually thought that COMET wasn’t going to be finished. It wasn’t because of technical difficulties; I just believed it would be tough for this project to end in a stable manner in the midst of this year’s changes in company leadership and roadmap. But, thanks to the experiences accumulated up until then, a solid process built on them began to work, and we were able to reach completion faster than anyone expected. It was really an incredible reduction in time. Of course, each day during development was a challenge and a crisis. Going beyond the concept and boundaries of the “person in charge” (JungHoon Cheon | Programming/Hardware Design) Of course, there are objectives that each individual must achieve, and these are normally quite clear. But individuals accomplishing only their own goals doesn’t mean that the project will go well.
Instead of merely waiting for other people in charge to fulfill their responsibilities, we took the lead ourselves whenever it was deemed necessary, thinking and communicating with one another. The fact that we can visit experts in different areas, regardless of which teams they are on, and comfortably discuss issues to solve is definitely a strength of the organizational culture at NAVER LABS. We were able to resolve our concerns whenever they occurred thanks to an atmosphere in which feedback can be shared easily with anyone at any time. Real expert-like cooperation among experts (Munyong Choi | GPS Hardware Design) There was a time when the GPS reception of COMET came out worse than expected. When that happens, experts in hardware, software and GPS algorithms all come together. Drawing on each of their areas of expertise, we observe and discuss in a variety of ways and find effective countermeasures, after which mechanical R&D engineers make the necessary changes right away. As a result, we were able to raise the performance up to our expectations. I get chills just observing it! The level of expertise of the entire team has been enhanced through cooperation without having to set boundaries between each other’s tasks. You, too, code, and I, too, plan (Sungyong Chung | Hardware/Firmware Design) In reality, we don’t divide up areas among one another, and instead freely cross domains. Persons in charge are defined, but that doesn’t mean development and decision-making are performed only by those persons. If deemed necessary, anyone can draw a circuit, write code, design equipment or establish a project plan. (Jaeryang Lee | Mechanical Engineering Design) Of course, realistically speaking, disputes inevitably arise. Sometimes there are really heated debates when opinions differ, and we frequently get temperamental and fall into arguments (not me, however).
But, in the end, we always reach a better conclusion. The fact that anyone can freely express his or her opinions and engage in debate is an extremely important element contributing to the completeness of a project because, at the end of the day, they are outstanding experts in their respective fields. “What is a strength of free communication among team members with expertise? It is that individuals’ domains of responsibility overlap and, soon enough, the boundaries between “my problem” and “your problem” disappear. I think it was possible because of our efforts to genuinely acknowledge one another’s expertise, and to be interested in and understand other people’s fields. Now we jokingly say that, even if we decide what each person is going to be in charge of for the next project by a game of ghost leg (ladder lottery), the project will indubitably go smoothly.” Q. What are the goals going forward? To set the standards for mapping devices applicable to any shape or form (Eungyo Jung | TL) As mentioned earlier, the COMET project aimed to collect high-precision spatial data across a variety of topographic features, and having successfully made that possible is the biggest achievement. Through this project, we experienced and solved a wide spectrum of issues and side effects that came from combinations of sensors. Based on such information and know-how, we are preparing to standardize NAVER LABS’s mapping devices. That is how we will be able to deal quickly and efficiently with the many other mapping projects to come. (Seong-jun Lee | PM) Actually, COMET is not an end in itself. We are going to increase the actual hours and environments of operation to test it, and find new points to improve on. Through this process, we will be able to develop it into an expandable system that can be applied to more diverse environments and machines.
(Jaeryang Lee | Mechanical Engineering Design) At first, things were tough even at the initial concept stage because we had to develop a type of device we had never tried before. But now we continue to review new materials and structures, and conduct tests for upgrades. Please look forward to COMET, which will continue to see new version updates. Preparing a solid foundation so as not to lose the assets of the past (Sungyong Chung | Hardware/Firmware Design) I think the biggest thing we’ve ultimately gained from COMET is the fact that we are making our own mapping device standards. From now on, all mapping devices developed by NAVER LABS will be based on COMET, regardless of form or purpose. We are doing away with the method in which concepts are newly designed every time the direction of a project changes; instead, the most effective mapping devices can be produced without losing any of the assets we’ve accumulated thus far. I believe that the failures of the past were necessary for these outcomes.
Robots, Autonomous Driving Vehicles, and Maps There is a kind of data that you must have for robots to be able to move about in our everyday spaces and for autonomous driving vehicles to be able to move about safely on our roadways. It is “maps.” Actually, these maps are a bit different from the map apps or navigation systems we are familiar with. They are machine-readable, 3D/HD maps. These maps play an extremely important role for robots and autonomous driving vehicles, because robots and autonomous driving vehicles rely on them for location recognition and route planning, things that humans can do naturally. This is why HD maps are called part of the brain of autonomous driving vehicles. Therefore, NAVER LABS is continuously developing solutions for creating 3D/HD maps. We are creating HD maps both indoors, using the mapping robot M1, and on the roadways, through the mobile mapping system R1 and aerial maps. However, there is still one more problem that needs to be solved: updates. The form of the world is always changing. Therefore, for maps, staying up to date matters as much as accuracy. Maps for robots or autonomous driving vehicles are no different. At NAVER LABS as well, techniques to help with this problem are being researched, utilizing robots, AI, MMS (mobile mapping systems), etc. Technology where Robots and AI Find Changed Shop Names Last year, we developed “self-updating map” technology that automatically discovers changes in shops within large-scale indoor spaces. Robots move about expansive and complex commercial spaces and accurately pick out changed shop names. To automatically analyze the images collected by the robots, computer vision and deep learning technology was also utilized. But since shopping malls are filled with so much visual information, it was very important to be able to differentiate shop information from advertisements, passersby, and so on, and perceive it accurately.
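One way to picture what the self-updating step produces is a diff between two snapshots of the shop names recognized at mapped locations. The sketch below is a simplified, hypothetical illustration — the location IDs, shop names, and the `diff_poi_snapshots` helper are all invented for this example and are not NAVER LABS’ actual algorithm:

```python
# Simplified illustration (not NAVER LABS' actual algorithm): classify POI
# changes by diffing two snapshots of the shop names recognized at mapped
# locations. All location IDs and shop names here are hypothetical.

def diff_poi_snapshots(old, new):
    """old/new map a location ID to the shop name recognized there
    (None = vacant). Returns a dict of location ID -> change type."""
    changes = {}
    for loc in old.keys() | new.keys():
        before, after = old.get(loc), new.get(loc)
        if before == after:
            continue                          # no change at this location
        elif before is None:
            changes[loc] = f"opened: {after}"
        elif after is None:
            changes[loc] = f"closed: {before}"
        else:
            changes[loc] = f"renamed: {before} -> {after}"
    return changes

snapshot_2018 = {"B1-01": "Cafe A", "B1-02": "Bookstore B", "B1-03": None}
snapshot_2019 = {"B1-01": "Cafe A", "B1-02": "Bakery C", "B1-03": "Shop D"}
print(diff_poi_snapshots(snapshot_2018, snapshot_2019))
```

The hard part in practice is upstream of this diff: reliably reading each shop name from cluttered imagery so that the snapshots themselves are trustworthy.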
The algorithm developed at NAVER LABS to achieve this can very accurately perceive when a shop has newly opened, closed, or changed, or when just the name of a shop has changed, and the results have been presented at the Conference on Computer Vision and Pattern Recognition (CVPR). Technology that Automatically Updates HD Road Maps This year, we are progressing with the ACROSS project to expand this type of updating technology to our roadways. Of course, the environment and conditions are very different from those indoors. ACROSS utilizes a method where mapping devices made up of low-cost sensors are installed on multiple vehicles, which then all simultaneously identify changes in roadway information. The image data collected by the mapping devices is likewise analyzed by AI. It detects changes in the existing HD maps’ road layout (lane information, stop line locations, road markers, etc.) or 3D information (traffic signs, buildings, traffic lights, street lights, etc.). In reality, it must also cope with changes in season, time, and weather, and be able to distinguish the cars on the roads as well. It is a task that is challenging in many ways, but we are continuously figuring things out. In the future, robots and autonomous driving technology will slowly break free from the lab and permeate our lives. To accomplish such an end, two things we have to prepare are HD map creation technology and an updating solution. To be more accurate, and always up-to-date! We are researching technology to accomplish this end.
NAVER's first robot was M1. Having made its debut in 2016, M1 is a mapping robot that creates three-dimensional high-precision maps of indoor spaces. Who were the maps for? They were for other self-driving robots. Maps created by M1 were uploaded to the cloud and connected in real time so that other robots could perform autonomous driving. We have introduced this type of new self-driving robot platform under the name AROUND, and have continuously demonstrated its achievements. In short, M1 is the starting point for the cloud-based autonomous driving robot platform. Three-dimensional maps made by M1 also act as the core data for a variety of location-based technologies. They serve as the basic data for visual localization, which ascertains current indoor position where GPS is not available, for AR navigation, and for the self-updating map technology that updates indoor maps using robots and AI. The maps hold high application value in several forms, which is why we have diligently kept the version up to date. Upgraded M1 mapping technology – M1X mapping robot M1X is the successor to M1. While inheriting M1’s mapping technologies, we were able to secure higher expandability and higher-quality output while also streamlining equipment expenses. On the hardware side as well, we enhanced driving stability to enable spatial scanning while moving without wobbling, and resolved vibration issues, which allows us to obtain much higher quality data. With an optimized sensor configuration, we were able to raise data quality while reducing robot production costs, improving robot localization accuracy by over 30% when applied. Whereas M1 mainly collected data optimized for self-driving robots, M1X collects data from more diverse sensors to enable application to a variety of indoor driving machines.
The data from M1X is currently applied not only to self-driving robots, but also to higher-accuracy localization services like AR navigation on smartphones. The technology that initiates space-based services In the process of upgrading our mapping technologies from M1 to M1X, we have obtained important outputs such as indoor localization technology, the self-driving robot platform and AR navigation. Right now we are investing a great amount of time and effort in technology to obtain data on everyday spaces even faster and without error, because such innovative mapping technology marks the new beginning for all space-based services. > Subscribe to our newsletter
Professor Kim Sangbae, Director of the Biomimetic Robotics Lab at the Massachusetts Institute of Technology (MIT), joined NAVER LABS as Technical Consultant on July 1. Professor Kim, a distinguished robotics engineer, has developed robots such as the MIT Cheetah 1/2/3, Mini Cheetah, Hermes and Meshworm, and his Stickybot was featured in Time Magazine as one of the Best Inventions of 2006. Currently, the MIT Biomimetic Robotics Lab, led by Professor Kim, and NAVER LABS maintain a relationship of continuous industry-academia cooperation. In particular, the MIT Cheetah 3 and Mini Cheetah, developed through this industry-academia cooperation, will be used as a technology to solve the mobility problems of robots in various areas, including sidewalks. “There is no need for robots to do things that humans are well capable of doing. There are other areas that are suitable for robots.” “I think robots can play a big role in solving society’s impending problems, such as the declining workforce. Mobility is essential for such physical services.” - Professor Kim Sangbae, quotes from his NAVER seminar lecture The appointment of a technical consultant is aimed at further strengthening organic cooperation with NAVER LABS. In particular, the focus of NAVER LABS on technology that provides practical help to people in various everyday spaces corresponds with the research philosophy of Professor Kim. We expect new and substantial synergies that will accelerate our technology roadmap, including technical cooperation in designing and controlling systems and mechanisms, cross-training of engineers through personnel and academic exchanges, and the discovery of talented individuals. > Subscribe to our newsletter
A-CITY is the future vision for cities pursued by NAVER LABS technologies. We research technologies for a city where every urban space is connected by diverse autonomous machines, artificial intelligence analyzes vast amounts of data to make predictions, and spatial data is informatized and updated so that even services such as delivery and logistics are automated. To achieve this, NAVER LABS is gathering a wide array of spatial data comprising city spaces to make HD maps for machines, and also developing an intelligent autonomous machine platform capable of transformation according to place, environment or purpose. We are also researching natural human-machine interaction (HMI) with the goal of providing useful services to people in everyday spaces. These are the core technologies that NAVER LABS is currently focusing on advancing to accelerate the arrival of A-CITY, its future vision for cities. M1 mapping robot, the beginning of indoor autonomous driving M1 is a mapping robot that produces high-precision 3D maps of indoor spaces. HD maps, made by applying SLAM technology to the point cloud collected by LiDAR, are used as the core data for diverse position-based services, including indoor autonomous service robots. We are currently expanding data usability even further while also increasing accuracy via M1X, an upgraded version of M1. See more on M1 The core data for road-level autonomous driving: Hybrid HD mapping Hybrid HD mapping, an original HD map production solution for autonomous driving machines, extracts the layout information of road surfaces from aerial images that capture large-scale urban areas. By organically combining this with data gathered by R1, our internally developed mobile mapping system (MMS), it enables quick and accurate production of HD maps over extensive areas.
See more on Hybrid HD mapping Technologies for automatically updating HD maps For machine-readable maps, being up to date is of utmost importance. At NAVER LABS, we are conducting research on the ACROSS project for HD maps and the self-updating map technology for indoor maps. ACROSS is a technology that senses changes in the road layout (lane information, stop line locations, road markers, etc.) and 3D information (traffic signs, buildings, signals, light poles, etc.) using devices equipped on numerous vehicles. The self-updating map is a technology that automatically recognizes changes in points of interest (POI) in large-scale shopping malls via AI and autonomous robots. See more on the ACROSS project See more on the self-updating map technology Mapping and localization for sidewalks with irregular surfaces and environments Sidewalks, which can be seen as the middle ground between indoor areas and roads, are highly influenced by changes in the seasons and weather. That is why we at NAVER LABS are conducting a project called COMET to develop mapping and localization for sidewalks. We are producing devices with a sensor arrangement that suits the sidewalk environment, and are also developing algorithms to process the data acquired by this mapping equipment. Although people carry and test the equipment in the short term, we plan for a four-legged walking robot, able to move around on diverse street surfaces, to acquire the data directly in future tests. Cheetah 3 and Mini Cheetah, developed by MIT with funding from NAVER LABS, will be utilized.
“R2D2: Reliable and Repeatable Detectors and Descriptors for Joint Sparse Keypoint Detection and Local Feature Extraction,” a visual localization research project at NAVER LABS Europe that accurately ascertains specific locations in spite of environmental changes such as weather, seasons, time and lighting, boasts highly innovative technology and won 1st place in the Local Feature Challenge of Long-Term Visual Localization at CVPR 2019. See more on the R2D2 project Seamless road-level precision localization technology NAVER LABS is researching technologies for autonomous driving machines to precisely estimate their own positions in real time, even in the complex environments of cities. We apply our internally developed HD maps like a virtual sensor in order to perform seamless and stable localization, even in areas such as dense building districts or tunnels where GPS is unreliable, and we are advancing the technology to extract the most accurate coordinates by combining the information acquired from various sensors such as LiDAR, cameras, IMUs and wheel encoders. See more on HD map aided localization VL technology, recognizing location indoors using just one photo Visual localization (VL) is a technology that analyzes an image to recognize the current location. It can ascertain the current position with high precision even indoors, where GPS is not available. NAVER LABS’ VL technology retains the highest level of global competitiveness as a solution for recognizing location by extracting and comparing characteristic points against 3D data captured by the M1. This technology is currently applied in the indoor self-driving robot platform, and aside from VL we are also concurrently developing AR technology that combines VIO (visual-inertial odometry), which tracks position by analyzing sensor and video data, VOT (visual object tracking), which recognizes objects and estimates their position and orientation in 6DOF (six degrees of freedom), and other technologies.
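The “extract and compare characteristic points” idea behind VL can be sketched in miniature: match each feature descriptor from a query image to its nearest neighbor in a pre-built map database, then vote for the location with the most matches. Everything below — the 2D toy descriptors, the `nearest` and `localize` helpers, and the distance threshold — is a hypothetical simplification; a real system uses learned descriptors such as R2D2 and goes on to estimate a full 6DOF camera pose rather than just a place label.

```python
import math

# Minimal sketch of the retrieval step in visual localization: match each
# query descriptor to its nearest map descriptor, then vote for the map
# location with the most matches. Descriptors and locations are toy data.

def nearest(desc, database):
    """Return (location, distance) of the closest map descriptor."""
    best_loc, best_dist = None, math.inf
    for loc, map_desc in database:
        d = math.dist(desc, map_desc)
        if d < best_dist:
            best_loc, best_dist = loc, d
    return best_loc, best_dist

def localize(query_descs, database, max_dist=0.5):
    """Vote across query descriptors; return the winning location."""
    votes = {}
    for desc in query_descs:
        loc, dist = nearest(desc, database)
        if dist <= max_dist:              # reject weak matches
            votes[loc] = votes.get(loc, 0) + 1
    return max(votes, key=votes.get) if votes else None

# Toy map: 2D descriptors tagged with the location where they were observed.
db = [("lobby", (0.1, 0.9)), ("lobby", (0.2, 0.8)),
      ("food_court", (0.9, 0.1)), ("food_court", (0.8, 0.2))]
print(localize([(0.15, 0.85), (0.12, 0.88)], db))
```

The threshold on match distance is what gives robustness: descriptors that match nothing in the map (a passerby, a temporary poster) simply cast no vote.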
AR is also a very important technology for utilizing the space itself as an interface. See more on the VL technology AMBIDEX, a robotic arm that is anatomically analogous to a human arm Directly providing services to people in everyday spaces requires a robot arm that is capable of high-precision motions while simultaneously ensuring safety. AMBIDEX, developed via industry-academia collaboration between NAVER LABS and KOREATECH, is a robotic arm that can interact safely with people using an innovative wire-structure power transmission mechanism. We also added a waist section to expand its radius of activity. We are concurrently researching reinforcement learning via simulators and other processes to enable the performance of smarter and more precise service scenarios. See more on AMBIDEX A machine platform for autonomous driving on the road After becoming the first IT business to receive a permit for provisional autonomous driving operation from the Ministry of Land, Infrastructure and Transport in 2017, NAVER LABS is advancing autonomous driving technology in all areas, from localization on actual roads to perception, planning and control. We are also producing HD maps for autonomous driving on roads using the hybrid HD mapping and ACROSS solutions. Integrating these technologies and data, we are developing an autonomous driving machine platform that can be customized for a variety of purposes such as logistics, delivery and unmanned shops. See more on the NAVER LABS autonomous driving technology Autonomous driving via the map cloud and reinforcement learning: the AROUND platform The AROUND platform is an independent solution developed by NAVER LABS with the goal of popularizing self-driving service robots.
It identifies its location and plans a route based on the map cloud made by the M1 mapping robot, and has the unique characteristic of enabling smooth autonomous driving, without the help of laser scanners, by applying a deep reinforcement learning algorithm. High-accuracy indoor autonomous driving is achieved using only low-cost sensors and low processing power, as opposed to many pre-existing self-driving robots, which must perform core functions such as map creation, localization, route planning and obstacle avoidance on board. See more on AROUND New possibilities for robotic services: the 5G brainless robot platform NAVER LABS succeeded in the first-ever 5G brainless robot demonstration at CES 2019. This technology involves moving the computer that serves as the robot's brain to the cloud and connecting to it via 5G. It effectively reduces production costs by enabling simultaneous control of numerous robots, and because the cloud serves as the robot's brain, robots can be made small yet highly intelligent. We are expanding this technology through research in connection with the AROUND platform. Through all of this, the NAVER Data Center Gak strives to become the brain for numerous robots and to provide robotic services in various ways. See more on the 5G brainless robot technology > Subscribe to our newsletter
Technologies that connect NAVER with everyday physical spaces NAVER LABS held a press conference on June 25. Following the recent CES 2019, it was an opportunity to reveal the missions and roadmaps guiding the technological advancements taking place at NAVER LABS. The vision for the future of NAVER LABS’ technologies presented that day was summarized as “Connect NAVER to physical world.” The background for this direction originates from the rapid blurring of the boundaries between physical and virtual spaces, with technologies such as high-performance sensors, AI, robots, and autonomous driving each gradually approaching the critical point of popularization. Although NAVER has grown to be the dominant force in the online virtual space over the past 20 years, if NAVER is to sustain its core value of connecting information and services, it is imperative to expand its modes, channels and methods of connection. On this topic, NAVER LABS CEO Seok Sangok announced that the very spaces where users live may soon become service platforms, and that NAVER LABS is concentrating its research on robotics, AI, autonomous driving, AR, and HMI (human-machine interaction) to enable the natural linking of physical spaces and NAVER services. The 3rd infrastructure: Auto-movables CEO Seok presented the concept of self-moving spaces, or auto-movables, that will change the city of the future. Seok emphasized that it will not be a matter of movables or immovables, but rather a third type of infrastructure that will greatly change the way we live, and that in the near future it will be these auto-movable spaces, equipped with information, services, and products, that create entirely new connections for us. He also revealed the roadmap for the technological realization of these concepts.
In order to provide information and services in a physical space, the very first thing you must do is precisely digitize the space in question, and the actual provision of information and services requires tools with the same physical characteristics. NAVER LABS plans, first of all, to develop solutions for building and updating 3D precision data for all spaces—indoor, outdoor and road-level—and to complete an autonomous machine platform that can move on its own through various spaces to provide information and services, based on the technology and data of HD mapping, localization, 5G cloud computing, 3D vision, etc. It was also revealed that they plan to raise the completeness of HMI (human-machine interaction) for the natural provision of services to people in everyday spaces. A-CITY, the vision for a future city undertaken by the technologies of NAVER LABS 'A-CITY' is the vision for a future city that NAVER LABS is forming through this technological roadmap. Every urban space will be closely connected by a variety of autonomous machines, artificial intelligence will analyze massive amounts of data to make predictions, and everything from informatizing spatial data to services like delivery and distribution will be automated. CEO Seok explained that, “NAVER LABS is taking on A-CITY as a challenging concept, but it is an inevitable future, and it is not limited to simple services like shipping and logistics; it is going to change the way we live, with tremendous possibilities.” The smart autonomous machine platform that will connect all spaces: indoors-outdoors-roads CEO Seok introduced the NAVER LABS solutions for the 3D spatial data that forms the most fundamental base for service robots, autonomous driving, AR, etc. Large-scale indoor spaces are covered by the M1 mapping robot.
The upgraded version, M1X, has been developed to a level where point cloud mapping can be completed within about 40 hours of wide-range scanning of a massive indoor space. Next, the plan regarding sidewalks, regarded as the middle ground between roads and indoor areas, was also revealed. A project called ‘COMET’ is underway to develop technology for mapping and localization on sidewalks, which have uneven surfaces and are highly influenced by lighting and the seasons. NAVER LABS revealed that, in collaboration with MIT, it plans to later apply this mapping equipment and its algorithms to the four-legged walking robots MIT Cheetah 3 and Mini Cheetah. Following CEO Seok, Leader Peck Jongyoon, who introduced NAVER LABS’ core technologies for autonomous driving, emphasized the unique characteristics of the road environment. Although sidewalks can be seen as spaces connected with indoor areas, roads differ completely in traffic signal systems, safety standards, etc. Describing road autonomous driving as a “combined art” in which technologies from a variety of fields must come together, including mapping, localization, perception, prediction and planning, Peck stated that HD maps take an especially important role in autonomous driving within downtown areas, where there are numerous GPS shadow zones. HD maps and the sensor data from GPS, LiDAR, cameras, etc., all developed in-house, are being combined to advance localization technology that offers very high precision while also working seamlessly. Leader Peck also introduced the hybrid HD mapping technology that utilizes the internally developed MMS vehicle, R1, and aerial photos to make an HD map of wide areas, announcing that within the year they will complete a layout map of 2,000 km of main roadways of four lanes or more in downtown Seoul.
Starting with the current 300 km section across major areas including all of Gangnam-gu, Yeouido, the Sangam area, and Magok, the plan is to rapidly set up road layout data covering the entire Seoul metropolitan area. Also introduced were the automation algorithm that enables road surface recognition through deep learning and vision technology, along with research on ACROSS, a crowd-sourced HD map updating solution. Next, Peck announced that “the plan is to increase the number of vehicles with provisional autonomous driving permits from the Ministry of Land, Infrastructure and Transport (MOLIT) in order to accelerate the development of technology for autonomous driving on the road,” and that the “goal is to later develop an autonomous machine for the road that can be utilized for a variety of purposes including distribution, delivery, and unmanned shops, through algorithms and data verified in the actual roadway environment.” Introducing original, world-class core technologies CEO Seok Sangok and Leader Peck Jongyoon gave summaries of the world-class core solutions and competencies held by NAVER LABS. These included the innovative solutions for creating machine-readable HD maps for indoor and road-level spaces, VL (visual localization) technology enabling location verification with just a photo in places without GPS access, the AROUND platform that allows smooth autonomous driving without a laser scanner through the map cloud and reinforcement learning, brainless robot technology utilizing 5G’s ultra-low latency, and the AMBIDEX robot arm with 7 degrees of freedom (DoF) and an added 3-axis waist section. Of all of these, CEO Seok revealed that the combination of the 5G brainless robot technology and the map-cloud-based AROUND platform, which gained high interest at CES 2019, is one of this year’s crucial missions.
The strategy, pursued in collaboration with Qualcomm, is to maximize performance and usability by applying the technology behind the world's first successful 5G brainless robot demonstration to the autonomous driving robot platform. The “5G brainless robot” technology, which moves the computer that serves as the robot's brain to the cloud and connects to it via 5G, can control a large number of robots simultaneously and thus effectively reduce manufacturing costs; because the cloud replaces the robot's brain function, it also enables the creation of small robots with exceptional intelligence. Along with Qualcomm, NAVER Business Platform, KT, and others are collaborating to set up this platform, and it was announced that the NAVER Data Center Gak in Chuncheon is preparing to serve as the brain for a variety of service robots. The VL technology, which can recognize the current location indoors using a photo, was also emphasized as holding the highest level of global competitiveness. This technology, which recognizes a location by extracting and comparing unique elements from 3D data captured by the M1, offers groundbreaking solutions to the problem of indoor localization without GPS access. Apart from VL, it was revealed that AR technology is also in development, coupled with VIO (visual-inertial odometry), which analyzes sensor and video data to track location, and VOT (visual object tracking) technology, which recognizes objects and estimates their position and orientation in 6 DoF (six degrees of freedom: three for position and three for orientation). The role of NAVER LABS is to prepare for the foreseeable future using technology CEO Seok explained that NAVER LABS' technologies have accumulated rapidly precisely because they do not stay in the lab, but are oriented toward the spaces of our real lives.
He hopes that technology will offer practical assistance to people in a variety of environments, and introduced the collaboration for this with the Seoul National University College of Nursing and the social enterprise ‘Bear Better’, adding that the open research environment where diverse experts can cooperate closely is also an advantage of NAVER LABS. Leader Peck Jongyoon responded to the questions following the press conference by saying, “We are developing our technologies with a sense of mission. The future is clearly approaching, but there are still not many places domestically that are passionately preparing for it.” He emphasized the importance of the core technologies currently being researched, stating, “I believe that, if we do not prepare for the future, the unfortunate circumstance may someday occur where we have no choice but to use the technologies of other countries.” In closing, CEO Seok Sangok stated that, “We are taking the lead in preparing for a future where our everyday physical spaces are recreated as new service spaces, within which spaces, machines, information and services are all naturally interconnected,” and conveyed that the goal is to “make the present that we feel is so normal to feel like an inconvenience of the past through technology.” > Subscribe to our newsletter
Cities, and especially their shape, changed greatly following the advent of the modern elevator in the 19th century. Going beyond the limits of flat land, high-rise architecture gave people a wholly new everyday space. The history of innovations in locomotion has repeatedly transformed our lives in this way. NAVER LABS, having conducted research on technologies including autonomous driving, robotics and AI, is now concentrating on a new concept that will change the cities of our future: self-moving spaces, or auto-movables. In the near future, auto-movables will provide information, services, products and more, creating new connections. Imagine a city where urban spaces are closely connected by a variety of autonomous machines, where artificial intelligence makes predictions by analyzing vast amounts of data, and where the conversion of spaces into data and even services such as delivery and logistics are all automated. We have given this future city the name A-CITY. This is the future that the technologies of NAVER LABS are now aspiring to achieve. Autonomous Everywhere–HD maps for machines The first phase in speeding the arrival of A-CITY is making HD maps for machines. Such maps are the most basic data needed for autonomous machines to move freely. Various spaces within a city present differing conditions for implementing autonomous driving technology, which is why a wide array of technologies must be applied to make HD maps for every space. Mapping robots called M1 will be tasked with covering large-scale indoor spaces like shopping malls. In order to quickly and precisely make massive HD maps of roads on a metropolitan scale, we developed an original solution called “Hybrid HD mapping.” We are also concurrently developing a mapping technology for sidewalks, where road surfaces are uneven and non-uniform. Updates are also important, as cities continuously change their form.
That is why we have developed self-updating map technology that uses robotics and AI to ascertain changes in indoor spaces, and are currently conducting the ACROSS project to update road-level data. The technology to mark every city space onto HD maps with seamless connections: this is the foundation of A-CITY. Autonomous Everything–Intelligent autonomous machine platform Offering services where autonomous machines are useful requires a great deal of technology working in unison. For example, at NAVER LABS we are researching four-legged robots' locomotion on uneven and non-uniform roads, artificial intelligence for smart robot services, robot arms to provide services directly to people, and even AR technology that uses space itself as the interface. Among these, the technology for high-precision localization, recognizing one's current position, is crucial. Although ascertaining a current position may look easy, machines face numerous environmental restrictions. Surely the very first thing that comes to mind when we say positioning is GPS. However, GPS does not work indoors, and even outdoors we can experience intermittent interference in the concrete jungle of a populated area. We are researching a wide array of localization technologies at NAVER LABS to overcome these issues. We are converging high-precision road-level localization linked to HD mapping with technologies such as finding a location based on just one photo, so that a machine can accurately recognize its location and plan effective routes from wherever it is. At NAVER LABS we are also concentrating on 5G and cloud computing as an important solution for the popularization of robot services. The “5G brainless robot” technology, which moves the robot's computer, the part serving cerebral functions, to the cloud and connects to it over 5G, enables effective cost reductions by controlling numerous robots simultaneously.
Since the cloud takes the role of the robot's brain, it becomes possible to make smaller robots with outstanding intelligence. This is why we see this technology as the primer for the popularization of self-driving robots. Outdoors, however, road-level autonomous driving technology works in a rather special environment: there are set traffic rules to be obeyed and various signals to be read. Since becoming the first IT company to acquire a permit for provisional autonomous driving operation from the Ministry of Land, Infrastructure and Transport in 2017, NAVER LABS has been advancing every area of autonomous driving technology for roads, including HD mapping, high-precision localization, recognition, planning, and control, all on actual roadways. We are forming an intelligent autonomous machine platform that combines high-precision localization, 5G and cloud computing, and various other autonomous driving and robotic technologies. This platform will relieve people of concerns about the technologies needed to achieve autonomous driving in all urban spaces; instead, it will allow us to focus entirely on finding and designing new and valuable experiences for auto-movable spaces. Autonomous Everyday–New connections permeating our everyday lives Our essential goal is not to keep future technologies in the lab, but to infuse them into the spaces of people's everyday lives. Interactions with users must feel extremely natural, with hardware dependable enough for error-free daily operation and software tested thoroughly enough to rule out erratic behavior. Not only that, but the core algorithms that serve as the machines' intelligence must be fully optimized, and spatial data must be kept up to date. Although these are certainly not easy tasks, this is nonetheless the future that is sure to come. However, the technologies that we are researching are not only for certain special services.
Just as the emergence of the elevator gave birth to unprecedented new living spaces in the form of high-rise buildings, and just as mobile technology keeps bringing all-new service experiences, the technology connecting auto-movable spaces will expand to include even more possibilities than we can currently imagine. A-CITY, to be filled with completely new connections: this blueprint, though still unfamiliar to us, will someday become a normal part of our daily lives. We are fully devoted to researching the core technologies that will make that day come sooner.
NAVER LABS presented its paper “Did it change? Learning to Detect Point-Of-Interest Changes for Proactive Map Updates” at CVPR 2019, the world's largest conference on computer vision and pattern recognition, sponsored by IEEE. The paper presented the results of about one year of joint research on the self-updating map by NAVER LABS and NAVER LABS Europe. Core technologies of NAVER LABS, such as robotics, computer vision and deep learning, were utilized to keep map information up to date by having autonomous robots collect and analyze data from large indoor spaces and recognize spatial changes. Meanwhile, NAVER LABS Europe ranked first in the “Local Feature Challenge” category of the "Long-Term Visual Localization challenge". The challenge was to determine the shooting location of a nighttime photograph of a particular landmark based on daytime photographs and their shooting locations. The NAVER LABS Europe researchers successfully developed a deep-learning-based feature that surpasses the scale-invariant feature transform (SIFT) feature, which has been used for nearly 20 years in the field of local feature detection. Going forward, it is expected to be applicable to various computer vision fields beyond visual localization. Related articles and websites A Self-Updating Map: Technology through which AI and Robots find changed signboards NAVER LABS' Indoor Dataset - COEX POI Change Detection (Jun. 2018 and Sep. 2018) CVPR 2019 Workshop: Long-Term Visual Localization under Changing Conditions Paper URL
NAVER LABS' indoor dataset is the result of scanning COEX, one of the largest shopping malls in Korea, twice at an interval of about two months (Jun. 2018 and Sep. 2018). This dataset consists of 17.5K geo-localized images with 578 points of interest (POIs) captured by a device called Pumpkin that has two LiDARs and multiple cameras. We currently provide only the images taken by Pumpkin's left and right side cameras, which are designed to capture storefront images that can be used for POI recognition and change detection tasks. In the near future we will release images taken by the other camera types as well, so this dataset will also be usable for VSLAM and visual localization research.

Downloads
COEX POI Change Detection dataset

Scanning device: Pumpkin
Pumpkin is equipped with the following main sensors:
Cameras: 6 x Sony RX0 (2 with wide-angle Samyang fisheye lenses), 2400x1600, 2 Hz, anti-distortion shutter (1/32000 super-high-speed shutter), ZEISS Tessar T* lens, 84° FoV (Samyang fisheye lens: 106° HFoV, 70° VFoV)
LiDAR: 1 x Velodyne Puck 16-channel LiDAR, 360° HFoV, 30° VFoV, 10 Hz, 100 m range, 2.0° vertical resolution, 0.1~0.4° horizontal resolution
Sensor Location

Data format
This dataset consists of images and their poses. The name of each image includes the serial number and timestamp, as '[serial #]_[timestamp].jpg'. The poses at which all images were acquired are in a separate file, 'sensor_trajectory.hdf'. In this file, 7-degrees-of-freedom (DoF) poses are recorded for all of the images; a 7-DoF state is 'x, y, z' for position and 'qw, qx, qy, qz' for orientation, in that order. The two tables, pose and stamp, are paired: the pose for the n-th stamp is the n-th entry in the pose table. If you are more familiar with '.json' than '.hdf', you can download the file and convert it.

How to generate data
Data acquisition
All of the images of this dataset were acquired by Pumpkin.
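The naming and pairing conventions described above can be handled with a few lines of code. The sketch below is illustrative only (the helper names are hypothetical, not part of the dataset tooling): it parses an image filename of the form '[serial #]_[timestamp].jpg' and pairs the n-th stamp with the n-th pose.

```python
from typing import NamedTuple

class Pose7DoF(NamedTuple):
    # 7-DoF state in the documented order:
    # position x, y, z, then orientation quaternion qw, qx, qy, qz
    x: float
    y: float
    z: float
    qw: float
    qx: float
    qy: float
    qz: float

def parse_image_name(name: str):
    """Split '[serial #]_[timestamp].jpg' into (serial, timestamp)."""
    stem = name.rsplit(".", 1)[0]
    serial, timestamp = stem.rsplit("_", 1)
    return serial, int(timestamp)

def pair_trajectory(stamps, poses):
    """Pair the n-th stamp with the n-th 7-DoF pose, mirroring the
    pose/stamp pairing in 'sensor_trajectory.hdf'."""
    return {stamp: Pose7DoF(*pose) for stamp, pose in zip(stamps, poses)}
```

Reading 'sensor_trajectory.hdf' itself would be done with an HDF5 library such as h5py; since the exact dataset keys are not listed here, inspect the file's keys before assuming a layout.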
To collect as much data as possible, we acquired images periodically and without stopping, instead of using stop-and-go motion. As mentioned above, because the RX0 has an anti-distortion shutter, we assumed there is no distortion from movement. All of the data, including point clouds and images, was recorded on the same timeline using the UNIX timestamp of the main processor. Estimating image pose To accurately estimate the pose at which each image was acquired, LiDAR-based SLAM was performed. However, since acquisition from the LiDAR and the cameras did not happen at the same time (i.e., they were unsynchronized), linear interpolation based on timestamps gave the pose of Pumpkin at the moment each image was acquired. The pose of each image could then be calculated from the relationship between Pumpkin's base and each camera, and was tagged to the image. Blurring To publish the dataset, we blurred faces in the images with our object detection model. The model was trained on data from NAVER Street View, which includes face annotations. We ran the model on our images to localize faces and applied a median filter to blur them. The remaining faces that the model failed to localize were handled manually.
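The timestamp-based interpolation step above can be sketched as follows. This is an illustrative reconstruction, not the dataset's actual code: position is interpolated linearly between the two bracketing SLAM poses, and the quaternion is blended linearly and renormalized, a common approximation when the two poses are close in time.

```python
import math

def lerp(a, b, t):
    """Component-wise linear interpolation between two vectors."""
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

def interpolate_pose(t_img, t0, pos0, q0, t1, pos1, q1):
    """Estimate the device pose at image time t_img from the two
    bracketing LiDAR-SLAM poses at times t0 <= t_img <= t1.
    Quaternions are (qw, qx, qy, qz)."""
    u = (t_img - t0) / (t1 - t0)
    pos = lerp(pos0, pos1, u)
    # keep the quaternions in the same hemisphere before blending
    if sum(a * b for a, b in zip(q0, q1)) < 0.0:
        q1 = [-c for c in q1]
    q = lerp(q0, q1, u)
    norm = math.sqrt(sum(c * c for c in q))
    return pos, [c / norm for c in q]
```

The camera pose would then follow by composing this interpolated base pose with the fixed base-to-camera transform mentioned above.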
When GPS cannot be reached We can see our location just by turning on a navigation device or map app. We are used to this, thanks mostly to GPS. But what about indoors? Verifying a location indoors with no GPS signal remains a troublesome task, and it is a problem that must be solved before services such as indoor guidance or indoor self-driving robots become possible. The technologies and infrastructure do exist. Let's take a look at what technologies NAVER LABS is using to find solutions. A technology where just a photo is enough to recognize a location We have raised the bar for visual localization (VL) technology. VL is a technology that determines a location using an image. In a way, this resembles our own daily experiences: people also view their surroundings with their eyes to identify where they are at any given moment. Of course, the scenery is a bit different from what people see; VL looks for distinct features in the image to identify positional information. Visual localization demo At NAVER LABS, we use the mapping robot M1. We extract distinct features from the data captured by M1 to produce a “feature map.” Information used for calculating a position is also included in this map. Using this feature map, positioning services can be performed with just one picture taken on a smartphone. The error is much smaller than with GPS. Not only that, but it can even accurately measure the direction you're facing. Uninterrupted positioning is also important We now know that the current indoor position can be identified using VL technology. But neither people nor robots just stand still in one place; they move around. Naturally, a precise positioning technology for situations involving movement is crucial. The technology used for this scenario is Visual Inertial Odometry (VIO), which analyzes sensor and video data to track a position. This technology also incorporates an optimization algorithm.
This is to enable uninterrupted positioning in real time on a smartphone, even with a limited network connection and a low-performance camera. Comparison: (from left) VL alone → VL + VIO → VL + VIO + optimization algorithm Essentially, VL technology identifies one's current location, and when in motion the real-time position is tracked using VIO with the optimization algorithm applied. These positioning technologies are used in the Indoor AR Navigation and Indoor Self-driving Robot developed by NAVER LABS. There is one more positioning technology that is useful for Indoor AR Navigation: Visual Object Tracking (VOT). This is a technology that can estimate the position and orientation of a moving object in 6DoF (six degrees of freedom: three for position and three for orientation) using image recognition technology. In an environment where VL does not function properly or is inaccurate, VOT is used to identify the exact location of an object or to add content for specific areas. VOT (visual object tracking) demo The starting point of indoor location-based services: positioning The core context of location-based services is, quite obviously, location. That is why, when we say we're solving the problem of positioning indoors where GPS doesn't function, it also means we're enabling the birth of new services that we have never been able to experience indoors. No longer will we have to struggle to find our way around a big department store on a first visit, and robots will be able to provide services while planning and following routes on their own. AR, which expands space itself into an interface, can also lead to more varied and useful services based on user location. This is the motivation behind our continued research on indoor positioning technologies.
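At its core, VL matches features extracted from the query photo against features stored in the map. The toy sketch below illustrates only that matching-and-voting idea with made-up 2D descriptors; a real system matches high-dimensional descriptors and then solves geometrically for a full 6DoF pose.

```python
import math

def match_score(query_desc, map_desc, thresh=0.5):
    """Count query descriptors whose nearest map descriptor lies within
    `thresh` Euclidean distance: a crude stand-in for inlier matches."""
    score = 0
    for q in query_desc:
        nearest = min(math.dist(q, m) for m in map_desc)
        if nearest < thresh:
            score += 1
    return score

def localize(query_desc, feature_map):
    """feature_map maps a place id to the descriptors observed there.
    Return the place whose descriptors best explain the query image."""
    return max(feature_map,
               key=lambda place: match_score(query_desc, feature_map[place]))
```

For example, a query whose descriptors lie near those stored for one place will be assigned to that place; production systems replace the final voting step with robust geometric verification.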
What’s ACROSS NAVER LABS' ACROSS is a project initiated to develop a crowdsourced mapping solution that keeps HD road maps up to date. Background "An HD map is the most essential piece of data required to enable autonomous driving on the road" Precise HD maps are essential for an autonomous-driving machine. HD maps allow the machine to better recognize its current location; the sensors equipped on the machine alone may sometimes not be enough to do that job. This prior knowledge is also useful when planning a route and predicting which areas will demand more attention. The importance of an HD map therefore grows in a complex large city. That is why NAVER LABS has continued to develop HD maps with a unique technology called hybrid HD mapping. Hybrid HD mapping is a method in which a wide range of road layout information is first obtained from aerial photographs, and point cloud data collected on the road with R1, an independently developed mobile mapping system (MMS), is then organically combined with it. This solution has the strength of allowing something on the scale of a large city to be constructed more cost-effectively and within a shorter period of time, while of course maintaining a high level of precision. However, there is still something missing. It's the destiny of all maps: keeping them up to date. Maps reflect reality, but not the present; the time when a map was made will always be in the past. After that time, a new road may have appeared or a new building may have been built. An updating solution is therefore directly tied to maintaining the precision of a map. (The same is true for the self-updating map introduced earlier, a technology for keeping indoor maps up to date that utilizes robots and AI.)
Approach "The dilemma of crowdsourcing, a tradeoff between the costs and performance of sensors in the mapping device" That is why hybrid HD mapping technology also requires an updating solution, and the ACROSS project is research aimed at developing one. We have selected the crowdsourced mapping method: mapping devices are installed inside multiple vehicles to simultaneously identify changes in road information over a wide scale. We are currently developing a solution that detects and updates changes in the road layout (lane information, locations of stop lines, road markers, etc.) or 3D information (traffic signs, buildings, traffic lights, streetlights, etc.) by processing image data collected by the sensors inside the mapping devices. However, there remains a dilemma to overcome: the mapping devices must be highly compact and built with low-cost sensors (cameras, IMU, GPS). Only then can they be equipped in more vehicles, addressing both the coverage of HD map change detection and its update cycle. Designing a mapping device with low-cost sensors and processors, however, inevitably results in performance tradeoffs. In the end, device design that facilitates wide use, together with algorithm optimization, constitutes the core of the ACROSS project. To this end, a wide range of technologies developed by NAVER LABS, including sensor fusion, computer vision, image processing, and machine learning, is being continuously applied. 5G networks also offer a new opportunity for ACROSS. With the high bandwidth of 5G, the environment is shifting to one where map information can be received faster and updated simultaneously. Above all, more options have become available between cloud and edge computing for optimizing the balance between devices and algorithms. Challenge "A world where high-precision 3D data on cities and roads are updated in real time."
We expect many trials and errors along the way to the success of the ACROSS project, and we remain relentless in our efforts to overcome challenges not yet mentioned here. It is important to remember, however, that these are crucial trials and errors: through such fierce challenges, the core technologies for HD maps and autonomous driving on the road will ultimately be acquired. This year, we will focus on designing the most suitable mapping device for ACROSS and optimizing algorithms based on those findings. Once this step succeeds, we will attempt a more diverse set of semantic mapping steps. Autonomous driving machines will form part of our lives in the future. HD maps for these machines will be there first, and then autonomous-driving machines will gain the ability to automatically update those HD maps on their own. High-precision 3D data on cities and roads will create an organic, virtuous cycle: ever more precise, and ever more up to date. The ACROSS project is preparing such a world. We will continue to share the progress and achievements of the ACROSS project.
1. 5G is a whole new change beyond fast internet. 5G is the next-generation communication technology succeeding 4G, commonly called LTE, and is upgraded in every aspect: ultra-high speed, hyper-connectivity, and ultra-low latency. With fast 5G speeds, network capacity will be virtually unlimited, and latency will be so low that almost no delay is felt. Does this only mean improving the things we enjoy right now? Now that the role and importance of mobile networks have greatly increased, 5G is anticipated to bring about a whole new change to mobile communications, surrounding ecosystems, and related industries. Along with technological advances such as artificial intelligence and XR, many things that were previously impossible or restricted are being newly attempted with 5G technology. 2. The most powerful use of 5G is with robots. Although Korea has introduced 5G technology at a fast pace, there are not yet enough cases of using it effectively; the main 5G use cases so far merely demonstrate large-size video transmissions with a one-directional signal pattern. For a robot to move, on the other hand, the role of 5G is essential, because a robot has a “bi-directional signal pattern” that requires constant data exchange between sensors and a high-performance computer. Furthermore, if communication latency becomes extremely low, we can hypothesize that high-performance robot control will be possible even if we separate the robot's computer from its main body. To verify this, NAVER LABS created the world's first robot demo operated over 5G jointly with Qualcomm and successfully demonstrated it at CES, seeking to use and apply the capabilities of 5G effectively. 3. 5G can create a “Brainless” robot that never existed before.
The robot demonstration performed at CES with Qualcomm was a 5G brainless robot that succeeded in pulling the high-performance computer, which acts as the brain of the robot, out of the main body. With the use of 5G's low latency, we can separate from the robot the part that acts as the “cerebrum of a human,” which requires the greatest processing power. In the 5G era, the MEC servers of communication base stations will act as the cerebrum controlling the posture and motion of the robot, and the cloud will act as the robot's brain. In other words, we will be able to implement a “brainless” robot in which the 5G-connected cloud acts as the brain. 4. 5G Brainless Robot technology eliminates robots' physical limitations. It has now become possible to separate the brain from the robot using 5G technology. So what does this make possible? Until now, a small robot could only carry a small computer; it was literally a physical limitation. However, if the cloud can act as the robot's brain, we can create a highly intelligent robot regardless of its size. This means a palm-sized robot with the intelligence of a high-performance computer can appear. With 5G brainless robot technology, we can control multiple service robots simultaneously from the cloud. Highly sophisticated robot algorithms can be provided through the cloud and updated easily, and production costs can be reduced since there is no need to add a high-performance processor to each robot. In addition, placing the high-performance processing power outside the robot significantly reduces its battery consumption. 5. 5G Brainless Robot technology is a catalyst for the popularization of robot services. One important requirement for the popularization of robot services is to maintain the required performance while lowering manufacturing costs.
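Why low latency matters for off-board control can be made concrete with a simple budget check. The numbers below are illustrative assumptions, not measured values: a posture-control loop running at 100 Hz leaves a 10 ms budget per tick, which a tens-of-milliseconds round trip exceeds but a few-milliseconds round trip fits.

```python
def loop_closes(control_hz, network_rtt_ms, compute_ms, margin_ms=1.0):
    """A cloud-issued control command must arrive back before the next
    control tick: network round trip + cloud compute time (+ a safety
    margin) has to fit inside one control period."""
    period_ms = 1000.0 / control_hz
    return network_rtt_ms + compute_ms + margin_ms <= period_ms

# Illustrative comparison for a 100 Hz control loop (10 ms budget):
print(loop_closes(100, network_rtt_ms=20, compute_ms=2))  # LTE-like round trip
print(loop_closes(100, network_rtt_ms=2, compute_ms=2))   # 5G-like round trip
```

The same check also shows the design pressure behind edge (MEC) placement: the shorter the network path, the more of the budget is left for the controller itself.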
This way, we can lower the hurdles to commercializing robot services in various industries and spaces, and accumulate service know-how through varied attempts in actual spaces, which will accelerate popularization. 5G and cloud technology, which can overcome physical limitations and simultaneously control multiple robots at lower power, will be an important solution for the popularization of these service robots. At CES 2019, NAVER LABS successfully demonstrated the world's first 5G Brainless Robot with Qualcomm. Subsequently, at MWC19, KT, Intel, and NAVER Business Platform started joint development of a 5G-based service robot: service robots will be developed with NAVER Business Platform in an ultra-low-latency environment using Intel and KT's 5G solution, with the NAVER cloud platform acting as the brain of the robots. Through 5G, the popularization of service robots is approaching much faster than we had imagined. NAVER LABS, starting with 5G brainless robot technology, will continue to produce products that accelerate the popularization of service robots.
NAVER LABS is taking part in public-private partnerships with 17 public agencies and private enterprises, including the Ministry of Land, Infrastructure and Transport, the National Geographic Information Institute, the Korea Expressway Corporation and other pertinent organizations, to build and update its HD map for autonomous driving vehicles. HD maps, while serving as “sensors” for autonomous driving vehicles, are also an essential infrastructure that can act as their “brain.” Recognizing this importance, NAVER LABS has been focusing its research on HD maps. Since 2017, NAVER LABS has been testing autonomous driving vehicles on actual roads after being granted a permit for provisional autonomous driving operations by the Ministry of Land, Infrastructure and Transport. NAVER LABS is advancing new autonomous driving solutions, such as the technology to stably ascertain locations in cities with high-rise buildings based on HD maps. In particular, at CES 2019, NAVER LABS introduced its ingenious HD mapping solution called “Hybrid HD Map,” which combines aerial photographs and MMS data. Through an MOU, the Ministry of Land, Infrastructure and Transport will support research and pilot projects to arrange a joint map-building system, while the Road Management Authority will cooperate in mapping and updating major roads and sections of the pilot project. NAVER LABS has also partnered with many private enterprises, and plans to take part in further partnerships with public agencies and private enterprises to build self-driving infrastructure based on the research accumulated so far.
You have probably already experienced this a few times: you go to a shop you have not visited in a while and end up having to turn back because its name has changed. In fact, domestic spatial information is said to change by more than 30% each year. The world is constantly in flux, even at this very moment. In other words, only a recently updated map is an accurate map. If map data is managed manually, the update cycle is slow and production costs are steep. Maintaining the recency of maps is thus a major concern for map users as well as online map service providers, which makes developing automation technologies for map updates crucial. To this end, researchers at NAVER LABS and NAVER LABS Europe have conducted joint research and developed technology for a "self-updating map.” This technology keeps map information up to date by recognizing business names that have changed through analysis of large-scale indoor spatial data collected by an autonomous driving robot. To achieve this, NAVER's core technologies, such as robotics, computer vision, and deep learning, are utilized. Automatically updates changes in signboards using AI and a robot We first tested this map-updating technology with a focus on large shopping malls, spaces where new stores open and other changes occur frequently. The self-updating map technology picks out only the stores that have changed in a large, complex interior space and produces data that allows map information to be updated automatically and accurately. The entire system is organized as follows. First, the autonomous driving robot moves around and collects images and positional information inside the shopping mall. Then, after some time, we take pictures of the same places again.
We compare the map and location information of both images to find the same spot, and determine immediately whether any changes have occurred by using deep learning technologies. We have to be careful to distinguish whether a sign belongs to a storefront or is just an advertisement, because shopping malls are spaces with a great deal of information on display. The algorithm we developed is capable of accurately recognizing, over a period of time, when stores in a shopping mall open, close, are replaced, or simply change their names. We have verified that it is suitable for efficiently managing large-scale POI information, using computer vision and deep learning technology paired with an autonomous driving robot to maintain the recency of indoor map information. Although autonomous service robots have not yet been popularized, in the near future many people will live in spaces where they interact with robots frequently. Those robots will be able to provide a variety of services including item delivery, security, and guidance, while simultaneously keeping indoor map information up to date by utilizing our self-updating map technology. The outcome of joint research between NAVER LABS and NAVER LABS Europe, to be presented at CVPR This technology was jointly developed by researchers at NAVER LABS and NAVER LABS Europe over a period of one year. The results will be presented this coming June at the Conference on Computer Vision and Pattern Recognition (CVPR) in California, USA, under the title “Did it change? Learning to detect point-of-interest changes for proactive map updates.” We will be able to attempt a variety of projects based on these results: reflecting various spatial data, such as sales information beyond business-name changes, on a map in real time, or recognizing and updating changes in spatial information on roads, i.e. non-indoor spaces.
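The "has this storefront changed?" decision can be reduced to comparing image embeddings of the same spot from the two visits. The sketch below is a simplified illustration, not the method of the CVPR paper: plain vectors stand in for CNN embeddings, and the similarity threshold is an arbitrary assumption.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def poi_changed(embed_then, embed_now, threshold=0.8):
    """Flag a POI as changed when embeddings of the same storefront,
    computed from the two scanning sessions, are too dissimilar."""
    return cosine_similarity(embed_then, embed_now) < threshold
```

In practice the embeddings would come from a network trained to be invariant to lighting and viewpoint while remaining sensitive to signboard changes, which is precisely what makes the problem hard.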
The world is constantly in flux, changing at this very moment. But technologies are also constantly being developed: technologies that will allow us to keep up with these changes.
Meet the wearable robot technology that increases physical strength in real life
In 2017, NAVER LABS first introduced an electric cart called AIRCART, which incorporates wearable robot technology that enhances physical movement by increasing strength and endurance. The cart can be used by workers who move heavy cargo or by people with physical disabilities to significantly increase their muscle strength or mobility. To apply this technology where it can benefit people on a day-to-day basis, NAVER LABS built it into a cart, a tool frequently used by many people. AIRCART can transport heavy loads with only a light push. It moves easily uphill and descends safely using an automatic brake.
AIRCART has received a lot of attention, not because of the complexity of the technologies implemented on it, but because of its applicability in real life. A reference model was actually put to use in a bookstore, which was followed by technological collaborations for commercialization. Last year, the AIRCART OPENKIT, which incorporates the patented technology and design of AIRCART, was made accessible to the public for six months while we began working on projects that aimed to apply the AIRCART technology in different areas. This is one of the results of that process: a wheelchair version of AIRCART.
A wheelchair that can be pushed with one hand, allowing eye contact with the wheelchair user
As society ages, the demand for wheelchairs increases every year. One thing we noticed was that a large proportion of caregivers and guardians of the elderly are elderly themselves. Moreover, 40.3% of people who have used a wheelchair reported having experienced an accident during use (based on the 2015 Survey on the Usage Status of Electric Assistive Devices). These are issues that can be solved by applying the AIRCART technology to wheelchairs.
The AIRCART Wheelchair is equipped with the core technology of AIRCART: physical strength and endurance enhancement. It is designed so that anyone can push the wheelchair easily and safely with a small force, regardless of the weight of the person in it. When going down a slope, often a dangerous situation, it automatically maintains a constant speed so that the person pushing does not have to pull back to keep it from rolling. If the person loses hold of the wheelchair, AIRCART brakes automatically and stops.
At CES 2019, NAVER LABS, in collaboration with Qualcomm, demonstrated the “5G Brainless Robot,” becoming the first in the world to successfully showcase technology that offloads the high-performance computer corresponding to the robot’s brain to an external cloud. This progressive collaboration with Qualcomm’s brilliant members created outstanding synergy and carried a challenging task through to success. Moreover, NAVER LABS met new partners at MWC19. KT, Intel, NAVER Business Platform and NAVER LABS decided to cooperate, each contributing its respective technology and infrastructure for 5G-based service robots. Under this cooperation, KT and Intel will provide their 5G solutions for ultra-low-latency configurations to develop a service robot platform, and the NAVER Cloud Platform will serve as the robot’s brain. 5G and cloud technology will be important solutions for the popularization of service robots. Still, there are many more possibilities and challenging projects ahead. To turn possibilities into reality, NAVER LABS will continue to work with brilliant partners through technological cooperation.
Technology that takes the brain of an autonomous robot outside its body When we announced our 5G robot technology at CES 2019, many people assumed that the demonstration would be remote-control-based, which is a cool piece of technology in and of itself. However, NAVER LABS, in collaboration with Qualcomm, went a step further to tackle a more challenging project: the “5G Brainless Robot.” In essence, this technology takes the high-performance computer that functions as the brain of an autonomous robot out of the robot’s body. The idea may be unfamiliar at first, but everyone, at one point or another, has witnessed something similar in sci-fi films. In the movie The Avengers, for instance, it might not have felt so strange to see the cyborg Chitauri warriors collapse in tandem when the mothership was destroyed. This idea, in fact, captures the basic essence of brainless robot technology. The minute decision-making that empowers the cyborgs to attack Hulk or thwart Thor’s assault is formulated within the mothership and delivered via a wireless network, most likely 5G or higher. Had the telecommunications been 3G or 4G, a cyborg would have had no choice but to helplessly take the full brunt of Captain America’s punches, unable to avoid them on account of signal latency even after recognizing an imminent attack and issuing a command in response. Robots featuring ultra-reliable and low-latency 5G technology Latency simply refers to the time required to give and react to a command. 5G is an ultra-reliable and low-latency communications technology with a latency of merely one millisecond, i.e. 0.001 seconds. This is one of the core technological features of 5G attracting significant attention. 
Applying the ultra-reliable and low-latency characteristics of 5G to a robot’s control cycle enables some very fascinating possibilities (a control cycle denotes the time required to process signals collected by sensors and deliver commands to the motors). Many humanoid-type robots consist of more than 100 sensors and 30 motors, and the average cycle during which sensor data is processed before commands are delivered to a motor is about 5 milliseconds. The latency of 5G communications, however, is a mere 1 millisecond, shorter than the control cycle itself. Thus, it becomes possible to connect a robot to an outside “brain” for posture and movement control via communications technology, instead of integrating a high-performance computer within the robot. This essentially means that an MEC server, or a 5G cloud connection, may serve as a robot’s brain, realizing a brainless robot. NAVER LABS’ 5G brainless robot technology garnered significant attention at this year’s CES thanks to the successful achievement of high-performance robot control utilizing 5G’s ultra-reliable and low-latency features. It may sound easy in theory, but 5G technology is an area that has yet to be thoroughly explored. In particular, high-precision control of a robot through a 5G connection requires countless signals and processing data going back and forth, making its degree of difficulty extremely high. In the pole-balancing demonstration of the robot arm AMBIDEX, numerous commands to detect the pole’s tilting center of mass and adjust the arm’s balance are delivered repeatedly through the 5G network. Setting technical difficulties aside, what kinds of possibilities could this technology provide us with in real life? Advantages of externally relocating the robot brain Members of NAVER LABS’ Robotics Team actively dedicate themselves to research on robots that ultimately provide services to people. 
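The arithmetic behind the brainless-robot argument can be checked in a few lines: a remote brain is feasible only if the network round trip plus server compute time fits within the robot's control period. The sketch below uses the figures quoted above (a ~5 ms control cycle, ~1 ms one-way 5G latency); the 2 ms server compute time and the function name are illustrative assumptions, not measured values.

```python
def remote_brain_feasible(control_period_ms, one_way_latency_ms, compute_ms):
    """Check whether an off-board 'brain' fits in the robot's control cycle.

    A sensor reading must travel to the server, be processed, and the
    resulting motor command must travel back before the next cycle begins.
    """
    round_trip_ms = 2 * one_way_latency_ms
    budget_used = round_trip_ms + compute_ms
    return budget_used <= control_period_ms, budget_used

# Figures from the article: ~5 ms control cycle, ~1 ms 5G latency.
# Assume the cloud spends 2 ms computing the command (illustrative).
ok_5g, used_5g = remote_brain_feasible(5.0, 1.0, 2.0)
# A 4G link with ~25 ms one-way latency cannot close the loop in time.
ok_4g, used_4g = remote_brain_feasible(5.0, 25.0, 2.0)
```

Under these assumed numbers the 5G loop uses 4 ms of the 5 ms budget, while the 4G loop would need over 50 ms, which is why the article singles out 5G's 1 ms latency as the enabling feature.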
The majority of such robots require the installation of high-performance computers inside the main body frame, which sounds, and actually is, expensive. This is exactly why reducing production costs is a prerequisite for the popularization of robots, and why a cloud-based service robot platform is a viable solution (NAVER LABS’ AROUND platform, which drives autonomously indoors based on a map cloud produced by a mapping robot named “M1,” was developed in a similar context). If the ultra-reliable and low-latency performance of 5G is utilized, however, it becomes possible to separate from the robot even the processor corresponding to its cerebrum, which requires a significant amount of processing power. Since an external server can control a number of robots simultaneously, each robot does not need a high-performance processor embedded inside it, thereby reducing production and maintenance costs. The cloud can also integrate and analyze the data collected by several robots, and then conveniently update all of them with a newly learned algorithm. Furthermore, the power consumption of robots becomes that much more efficient. The main computer of a robot consumes a great deal of battery power, much as about 20% of the human body’s entire energy use is devoted to neural activity; for self-driving robots, the main computer can consume as much as 40% of the energy. In other words, simply moving the high-performance processor off-board leads to a remarkable decrease in battery consumption. In fact, the battery charging period is a key factor in service robot usage. There is yet another interesting advantage. Would it now be possible to create a small robot with a high-performance computer? 
In the past, only a small computer could be embedded in a small robot due to physical limitations; however, if the cloud serves as the robot’s brain, it becomes possible to create super-intelligent robots regardless of size. Technology popularizing service robots NAVER LABS conducts research on ambient intelligence technology that blends into the physical spaces in which people dwell while naturally providing information and services. Service robots are a core platform for achieving this, which highlights the importance of 5G and cloud technology in their popularization. CES 2019 provided a platform for the competent and proud engineers of NAVER LABS and Qualcomm to successfully demonstrate the functionality of the world’s first-ever 5G brainless robot. Also, at MWC19, NAVER LABS agreed to commence collaborative efforts in 5G-based service robot development with KT, Intel, and NAVER Business Platform. The goal is to develop service robots utilizing various 5G solutions offered by Intel, to provide robot services under the ultra-reliable, low-latency conditions of KT’s 5G telecommunications network and Edge Cloud infrastructure, and to empower the NAVER Cloud Platform to function as the robot brain. We anticipate more progress ahead as the specialized engineers of each company devote their collective energy to actualizing the future of robot technology. It is undoubtedly thrilling to be at the moment where the imaginations of the past can be realized by the technology of the present. We aim to collaborate passionately through the best of partnerships so that we can produce something no less than extraordinary.
How close is the future we once imagined? You can find out by visiting this event: the Consumer Electronics Show (CES), now the largest technology exhibition in the world. CES 2019 was a special event for NAVER and NAVER LABS because we held our first official booth. We unveiled new technologies that integrated our research results in areas including robots, autonomous driving, and AI, with 13 new products. Let us introduce the highlights of this exhibition. 5G Brainless Robot: a technology for taking the “brains” of the robot out of its “body” The main topic of CES 2019 was 5G. NAVER LABS gave demonstrations of innovative 5G technologies of the kind seen in science fiction movies, among them the 5G Brainless Robot technology. This technology pulls the high-performance computers that function as the robot's “brain” out of the robot’s “body”; an external cloud connected over a 5G network then serves as the robot's brain. The reason this technology received special attention at CES is that we, in collaboration with Qualcomm, achieved high-performance robot control that fully utilized 5G's ultra-low latency, for the first time in the world. The potential future uses of this technology are innumerable. The NAVER data center could function as the brain of service robots working all over the world. Since multiple robots can be controlled simultaneously, there is no need to install a high-performance processor in each robot, and it becomes easier to integrate and analyze the data collected by multiple robots and to update them all at once as new algorithms are refined. In many ways, this is a key technology for cloud-based robot services. > Learn more about 5G Brainless Robots Hybrid HD Map, a unique HD Map solution for autonomous vehicles A new technology geared towards autonomous vehicles has also been unveiled. 
NAVER LABS has been researching autonomous vehicles since 2016 and introduced its Hybrid HD Map technology at CES. HD maps are a critical piece of data for autonomous vehicles: making good use of one allows a vehicle to know its current location more accurately and to plan routes safely and effectively. The Hybrid HD Map technology that NAVER LABS demonstrated is quite novel. Unlike other methods, NAVER uses aerial photographs taken from airplanes together with MMS vehicle data in a two-part process. First, the layout information of the road surface is extracted from the aerial photographs. Then, that data is organically combined with data collected by R1, a self-developed MMS (mobile mapping system) vehicle. It is an effective way to produce a vast, city-scale HD map more accurately and quickly than ever before. > Learn more about Hybrid HD Maps AROUND G: a robot that drives autonomously without using a laser scanner AROUND G is a robot that guides people through AR in large, complex indoor spaces such as shopping malls, airports, and hotels. An indoor autonomous robot is itself no longer a new technology; there were many others at the most recent CES. However, AROUND G has a distinct point of difference: it does not use an expensive LiDAR (laser scanner). A laser scanner is a device that perceives a robot’s surroundings by measuring how long emitted light takes to strike objects and reflect back. Many autonomous machines use this type of sensor; the problem is that it is expensive. What NAVER LABS has been researching is how to achieve fluid autonomous driving using only very cheap camera sensors rather than expensive equipment, because we believe such technology is needed to popularize autonomous robots. For AROUND G, many of the features required for autonomous robots are handled in a map cloud, and the robot itself is equipped with only low-cost sensors. 
Even with low-cost sensors, it moves very fluidly between obstacles and pedestrians because it uses a deep reinforcement learning algorithm. This surprised many companies developing autonomous robots at the recent CES. > Learn more about AROUND G AHEAD: a 3D AR HUD AHEAD, a 3D AR HUD (head-up display) for vehicles, also drew attention from many automobile manufacturers and electronic parts companies. AHEAD utilizes 3D optical technology that adjusts displayed information so it appears to lie on the actual road, right in the driver's natural line of sight. There are many advantages to having the actual road and the displayed information appear to be at the same distance. Since the images displayed by AHEAD look as if they are actually on the road, the gap between the road the driver must pay attention to and the place they have to look for information on a traditional dashboard is reduced, improving safety. This technology helps solve a concern with existing HUDs, where the focal distance of the displayed virtual image differs from that of the actual road, which can distract the driver. AHEAD provides information naturally, without disturbance, allowing the driver to keep their eyes forward, and can become a new display solution connecting vehicles and information. > Learn more about AHEAD We have also released many other technologies. The future that NAVER LABS has envisioned so far is integrated into a technological vision called ambient intelligence: technology that understands user environments and provides the necessary information and services before users even request them. This is the future of NAVER. To this end, we have been researching technology for collecting high-precision data, such as indoor paths and roads, and using it to provide information and services through various robots and computing devices. 
"You mean to say that all this was developed by NAVER?" This was a question asked by a person who was happy to find the familiar NAVER logo at CES and visited our booth. Perhaps, as much as he was familiar with NAVER, he was also excited and surprised to discover the new technologies that we exhibited. There are still many people we find who are unfamiliar with the fact that NAVER is developing robots and researching autonomous technologies. However, these are the technologies we need to prepare for the future. These key technologies will be mixed into future NAVER services and will provide users with information and services in new ways. That is why they fit the theme of this exhibition, "the possibilities of new connections and discoveries through technology." In addition to the technologies introduced above, you can find more information about exhibits displayed at CES .
Last year, we unveiled the xDM platform for the first time at DEVIEW. The xDM platform is an integrative location and mobility technology that combines technologies being researched at NAVER LABS, including robot- and AI-based HD mapping, location and navigation technologies, and precision data. The aim of the xDM platform is to enable various mobility and space-based services. As part of this effort, we introduced various location-based and self-driving services built on the xDM platform at CES 2019, including NAVER LABS’ AR navigation, self-driving vehicle, service robot, and ADAS. Furthermore, today we begin our collaboration with LG Electronics, applying our xDM platform to LG Electronics’ CLOi robot. By applying the xDM platform to robotics, it is possible to realize indoor self-driving technology that supports precise control using only low-cost sensors and low processing power. This is achieved by dividing the required functions and roles: the map creation task is allocated to a mapping robot, and the location identification and route creation tasks to the xDM cloud. Through the partnership with LG Electronics, we intend to amplify the efficiency and precision of the CLOi robot by applying the strengths of the xDM platform, while perfecting its accuracy as an integrative location and mobility platform by utilizing the newly collected data. NAVER LABS will continue joint research and development efforts with LG Electronics on applying the xDM platform to other devices. We plan to conduct demonstration projects for performance improvement and optimization, and to find new ways to utilize the data collected through the collaborative project between the CLOi robot and the xDM platform. By integrating the proprietary technologies of the two companies, we expect new technological innovation to arise from a great synergy effect. 
The ambient intelligence research of NAVER LABS aims to provide useful services that naturally integrate into daily living spaces. We intend to develop new services and tools that understand the contexts of everyday life in all spaces where people reside. Hand in hand with a great partner, we will continue our efforts to realize this vision.
NAVER has proudly unveiled its booth at CES 2019. The booth is located in the Central Plaza of Tech East. See booth location and overview ■ AMBIDEX Demonstration - The World’s first 5G brainless robot AMBIDEX, which uses innovative cable-driven mechanisms, is a robot arm capable of interacting safely with humans. Working together with Qualcomm, NAVER LABS successfully demonstrated the 5G capabilities of AMBIDEX at CES. The advanced technology enables precise control over the robot using the low latency of 5G networks, and does not require high performance processors. ■ AROUND G Demonstration - The culmination of xDM platform technologies AROUND G is an autonomous guide robot that provides guidance in large indoor spaces such as shopping malls, airports and hotels. It is the culmination of technologies being researched under the xDM platform, including HD mapping, visual localization, robotics, AI, and AR navigation. A distinct feature of the robot is that it functions smoothly as an autonomous guide using the deep reinforcement learning algorithm, without having to rely on expensive laser scanners. ■ NAVER LABS’ diverse location & mobility intelligence technologies NAVER’s booth is largely comprised of an indoor section and an outdoor section. This concept mirrors the characteristic of location and mobility intelligence technology, which functions seamlessly across indoor and outdoor environments. The exhibition features NAVER LABS’ key research outcomes, ranging from on-the-road R1 to indoor autonomous robots. See details on exhibits
At CES 2019, NAVER LABS presents its latest location and autonomous mobility intelligence technologies, developed with the goal of achieving ambient intelligence. See booth location and overview ■ xDM Platform eXtended Definition & Dimension Map The xDM platform is an integrated location and mobility solution for smartphones and autonomous machines. It combines our portfolio of robotics, autonomous driving and AI-based technologies such as HD mapping, localization and navigation with high-precision spatial data. Key features are self-updating 3D/HD mapping, precise indoor and outdoor positioning and context-aware location information based on real-time spatial data. The platform solution supports the development of future location-based services such as augmented reality (AR) walking navigation and autonomous mobility services that include autonomous vehicles, service robots and advanced driver-assistance systems (ADAS). ■ Mapping Solutions M1, Indoor Autonomous Mapping Robot M1 is an indoor 3D/HD mapping robot that navigates autonomously in indoor spaces. M1 automatically collects high-resolution images and 3D spatial data via high-performance cameras and LiDAR, significantly improving the efficiency of what was previously a manual mapping process. The resulting HD maps provide spatial data that is essential to location-based services, such as AR walking navigation and indoor autonomous service robots. Self-Updating Map NAVER LABS uses cutting edge AI technologies for advanced research on self-updating maps. The technology uses data collected by indoor autonomous robots and advanced AI solutions developed by experts in robotics, computer vision, deep learning and machine learning. Point of interest (POI) change detection technology detects and updates information on individual stores in large shopping malls. Further research advances on POI attribute recognition and semantic mapping technology will be phased in over the next few years. 
■ Autonomous Robots AROUND Platform, Autonomous Service Robot Platform The ambition of the AROUND platform is to commercialize autonomous robot services. The key functions of the autonomous robots are distributed between mapping robots and the xDM cloud, a separation that significantly lowers manufacturing costs. The mapping robot retrieves spatial data by navigating the indoor environment. The map data is then uploaded to the xDM cloud, from where autonomous services are delivered through cloud-based visual localization and path planning. The collision avoidance algorithm that runs on the edge ensures that the AROUND platform responds effectively to unexpected circumstances and avoids obstacles until the destination has been safely reached. Depending on spatial characteristics and user needs, it can be customized to serve different purposes, from delivering books in a library or store to giving directions in a shopping mall. AROUND G, Autonomous Guide Robot AROUND G is an autonomous guide robot built on the AROUND platform. It provides guidance in large indoor spaces such as shopping malls, airports and hotels, and delivers intuitive information through AR navigation. High-precision indoor maps and visual and sensor localization are all served over the xDM platform to provide accurate location sensing and to guide users to their destination via the best route. The AR navigation installed in the main unit delivers information on the surrounding space while giving directions. Immersed in its environment, AROUND G creates ambient intelligence whereby users are more engaged by the useful services the robot provides than by the robot itself. ■ Autonomous Driving Hybrid HD Map & R1 Based on our autonomous driving and 3D/HD mapping technology, we’re developing mapping solutions using aerial images and mobile mapping data. The 3D mapping technology combines the aerial images and extracts information from the road surfaces. 
The lightweight mobile mapping system R1 then generates HD maps from point clouds while moving autonomously. Compared to HD maps obtained with expensive mobile mapping systems, this hybrid HD map solution maintains high accuracy at lower cost. NAVER LABS ADAS CAM The ADAS CAM offers a suite of ADAS functions based on deep learning algorithms. The system relies on only a single camera for forward-collision warning (FCW) and lane-departure warning (LDW). In addition, the integration of the hybrid HD map on the xDM platform enables functions of higher precision even in complex environments. The ADAS camera modules, developed in-house, accurately gauge road conditions in a variety of circumstances with high dynamic range (HDR) and flicker-free functions. ■ NAVER Maps & Wayfinding NAVER Maps & Wayfinding NAVER Maps offers common, everyday services such as location search, public transit information and driving navigation. Users are seamlessly provided with up-to-date information on indoor and outdoor spaces and, over the xDM platform, other innovative services are being developed to meet future needs. Indoor AR Navigation NAVER LABS provides indoor AR navigational information based on user location and positioning, even where there is no GPS coverage. It utilizes indoor maps created by the mapping robot M1 on the xDM platform, together with visual and sensor localization technology. Turn-by-turn directions are given with reference to POIs within the user’s visual range instead of the remaining distance to cover. AKI, Location & Geofencing Technology AKI is a smart watch for young children that utilizes location detection, geofencing technology and personalized positioning over the xDM platform. Based on location pattern analysis, AKI provides timely notifications of a child’s location and movements to their parents and guardians. 
AWAY, In-Vehicle Infotainment Platform AWAY is an infotainment platform for vehicles with a user interface that enhances driver safety and specifically optimizes music, news and other media services for the driving environment. The AWAY head unit gives drivers simultaneous access to various functions, from media content to navigation, on a wide 24:9 ratio screen that supports split view. The platform has been deployed in vehicles operated by the Korean car sharing company Green Car. AHEAD, 3D AR HUD AHEAD is a 3D AR head-up display (HUD) for vehicles. Most HUDs can be distracting for drivers due to the different focal distances between the virtual images and their actual view. Through 3D optical technology, the virtual images projected by AHEAD appear to exist on the road, allowing drivers to effortlessly perceive information. Download AHEAD brochure (PDF) ■ Robotics AMBIDEX, Robot Arm with Innovative Cable-Driven Mechanisms AMBIDEX is a robot arm resulting from collaborative R&D on human-robot coexistence. The arm uses innovative cable-driven mechanisms that make any interaction with humans safe. At just 2.6 kg (5.7 lbs), it weighs less than the average arm of a male adult. AMBIDEX can be operated at a maximum speed of 5 m/s and is capable of carrying up to 3 kg (6.6 lbs). Because AMBIDEX can be controlled to the same extent as an industrial robot, it has a wide range of applications, from simple carrying to complex tasks that require precise manipulation and collaboration. AMBIDEX supports high-speed, wireless, real-time control from remote locations using the low latency and high throughput of 5G networks. AIRCART, Human-Power Amplification Technology The AIRCART trolley is built on robotics technology that augments human strength. Its physical human-robot interaction (pHRI) makes it easy for anyone to shift heavy loads. 
The user’s intention to move AIRCART is captured by a force sensor on the handle, so controlling it is intuitive and simple from the start. Its automatic braking system prevents accidents when going up or down a slope. AIRCART is in use at bookstores and factories.
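The handle-sensor control described above is, in spirit, an admittance-style law: the measured handle force is mapped to a velocity command that the drive motors amplify, and the brake engages the moment the grip is released. The following is a minimal sketch only; the gain, speed limit, and function names are illustrative assumptions, not AIRCART's actual controller.

```python
def velocity_command(handle_force_n, grip_detected, k_gain=0.05, v_max=1.5):
    """Map handle force (N) to a cart velocity command (m/s).

    Illustrative admittance-style law: a light push yields a proportional
    velocity, amplified by the drive motors; releasing the grip brakes.
    """
    if not grip_detected:
        return 0.0  # automatic brake: no hands on the handle -> stop
    v = k_gain * handle_force_n          # proportional force-to-velocity map
    return max(-v_max, min(v_max, v))    # clamp to a safe speed limit

# A light 20 N push commands the same speed regardless of the load's weight.
v = velocity_command(20.0, grip_detected=True)
# Letting go of the handle immediately commands a stop.
v_stop = velocity_command(20.0, grip_detected=False)
```

The clamp is what would make a heavy cart feel safe on a slope: no matter how hard gravity or the user pushes, the commanded speed never exceeds the limit.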
NAVER is a company creating new ways for people to discover and connect. The information and services we offer are based on contextual understanding, personalization and natural interfaces. To seamlessly integrate these services into diverse life experiences, NAVER LABS is developing innovative technology in robotics, autonomous mobility and location intelligence. Learn more about us in the NAVER exhibition area at CES 2019. ■ About Company NAVER NAVER Co., Ltd. is South Korea’s largest web search engine, as well as a global ICT brand providing services that include the LINE messenger, currently with over 200 million users around the world, the SNOW video app, and the digital comics service NAVER WEBTOON. Meanwhile, NAVER BAND, a group SNS service, has achieved a million MAU. Sustained research and development in AI, robotics, mobility, and other future technology trends propels NAVER in its pursuit of transforming and innovating technology platforms, while it also devotes itself to a paradigm of shared growth with users from the global community and a vast number of partners. In 2018, NAVER was ranked 9th among the world’s most innovative companies by Forbes and 6th on Fortune’s Future 50 list. NAVER LABS Founded in 2013 as NAVER's research center, NAVER LABS spun off as a separate entity in 2017 to focus its research on ambient intelligence in areas such as autonomous driving, robotics, artificial intelligence and geospatial data. NAVER LABS' mission is to achieve ambient intelligence that enriches user environments with technology that proactively understands users and provides them with information and services. In line with this mission, distinguished researchers from Korea and Europe are committed to understanding the places where people carry out their lives, in order to connect these locations and shape the future of mobility. ■ About CES CES® is the world's gathering place for all who thrive on the business of consumer technologies. 
It has served as the proving ground for innovators and breakthrough technologies for 50 years: the global stage where next-generation innovations are introduced to the marketplace. As the largest hands-on event of its kind, CES features all aspects of the industry. CES 2019 will run January 8-11, 2019 in Las Vegas, NV. ■ Booth Location Tech East, LVCC, Central Plaza – CP 14 ■ CES 2019 Innovation Awards Honorees R1, Mobile Mapping System (Vehicle intelligence and self-driving technology) AWAY, In-Vehicle Infotainment Platform (In-vehicle audio/video) AHEAD, 3D AR HUD (In-vehicle audio/video) AMBIDEX, Robot Arm with Innovative Cable-Driven Mechanisms (Robotics and drones) ■ Exhibitions Learn more : Introduction of NAVER LABS’ CES 2019 exhibits xDM platform, eXtended Definition & Dimension Map The xDM platform is an integrated location and mobility solution for smartphones and autonomous machines. It combines the NAVER LABS portfolio of robot- and AI-based technologies such as high definition (HD) mapping, localization and navigation with high-precision spatial data. Key features are self-updating 3D/HD mapping, precise indoor and outdoor positioning and context-aware location information based on real-time spatial data. The platform solution supports the development of future location-based services such as augmented reality (AR) walking navigation and autonomous mobility services that include autonomous vehicles, service robots and advanced driver-assistance systems (ADAS). 
Mapping Solutions M1, Indoor Autonomous Mapping Robot Self-Updating Map Autonomous Robots AROUND Platform, Autonomous Service Robot Platform AROUND G, Autonomous Guide Robot Autonomous Driving Hybrid HD Map & R1 ADAS CAM NAVER Maps & Wayfinding Indoor AR navigation AWAY, In-Vehicle Infotainment Platform AKI, Smart Watch for Kids AHEAD, 3D AR HUD Robotics AMBIDEX, Robot Arm with Innovative Cable-Driven Mechanisms AIRCART, Human-Power Amplification Technology ■ Demonstration Schedule (1/8-1/10) AROUND G 11:00 / 13:00 / 15:00 / 17:00 AMBIDEX 11:30 / 13:30 / 15:30 / 17:30 ■ Contact Partnership Proposal email@example.com Media Contacts Ryan Hyeonwoo Lee firstname.lastname@example.org (LINE) hlee293 Dong-keun Han email@example.com (LINE) drake3323
NAVER LABS is beginning a technological collaboration with Qualcomm, a global pioneer in advanced digital wireless communication technologies, products, and services. Starting with a memorandum of understanding with Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated, we are going to proactively integrate technologies from each company in a range of fields including robotics, self-driving technology, and AR, among others. Through this technological cooperation, NAVER LABS will be able to take its technologies, including self-driving, IVI, robotics, precision location, and AR navigation, to the next level by leveraging the know-how and solutions that Qualcomm has accumulated as a leader in the global chip market. We also expect our research on ambient intelligence to expand as a result of this partnership. Synergy may take the form of advancement, but it can also lead to new possibilities that did not exist before. Through organic cooperation between the two companies, we will begin new stories of technological innovation in the places of our daily lives. We will continue to share the process and outcomes of this promising work.
NAVER LABS is starting a new collaboration with SOCAR. On the 14th, NAVER LABS is signing a partnership with SOCAR to work on Advanced Driver Assistance Systems (ADAS) and HD maps based on self-driving vehicle technology. We plan to apply the self-driving technology know-how we have accumulated so far in the form of ADAS to contribute to the safe operation of SOCAR. In addition, we intend to link the xDM platform that we unveiled at DEVIEW 2018 with SOCAR's vehicles in order to render dynamic maps that show traffic conditions in real time. This will help SOCAR customers reach their destinations in a safer and faster manner. As is well known, SOCAR is the biggest car sharing company in Korea, directly operating around 11,000 vehicles. The large-scale data collected by SOCAR's vehicles and the map information will be integrated with NAVER LABS' technology to accelerate the formation of a digital twin ecosystem in which real-time information on the road environment is uploaded directly to the xDM platform. A good collaboration always brings new possibilities. NAVER LABS will continue to build innovative partnerships to develop technologies that have real-life applications and that directly address problems experienced on a daily basis.
The outcome of NAVER LABS’ research on ambient intelligence has led to four CES 2019 Innovation Awards. Every year, a judging committee comprised of industry experts, including engineers and designers, selects products with excellent technological prowess and competitive designs to receive the CES Innovation Awards. This year, NAVER LABS participated in three product categories, and four of its products were honored with the prestigious award. AHEAD and AWAY received awards in the in-vehicle audio/video category, NAVER LABS R1 received an award in the vehicle intelligence and self-driving technology category, and AMBIDEX was recognized in the robotics and drones category. AHEAD, 3D AR HUD AHEAD is a three-dimensional augmented reality head-up display (3D AR HUD) unveiled for the first time at DEVIEW 2018. Unlike conventional HUD technology, which creates an image at a single focal length, AHEAD provides driving information in a way that is more naturally integrated with the real road environment. It lets drivers feel as if the visual information really exists on the road, so they can more easily take in the various kinds of driving information provided, such as navigation instructions, front collision warnings, lane departure warnings, safety distance warnings, and so on. AWAY, in-vehicle infotainment platform AWAY is an infotainment platform for vehicles invented by NAVER LABS. It offers a range of media services optimized for the driving environment, including a UI designed for driver safety, various location-based information services, an exclusive navigation program with a voice agent that can search for destinations, Naver Music, Audio Clip, and so on. One of the defining features of the AWAY head-unit display showcased at CES this year is the 24:9 split-view system, which allows the user to simultaneously enjoy multiple functions, such as media content and navigation, without visual interference. 
NAVER LABS R1, mobile mapping system NAVER LABS R1 is a mobile mapping system designed to create hybrid high definition (HD) maps for self-driving vehicles. The hybrid HD maps, based on NAVER's proprietary mapping solution, are created by organically integrating information retrieved from preexisting precision aerial photographs with the point cloud information collected by an R1 vehicle. Both the 2D and 3D data are processed with a unique algorithm that automatically extracts the features required to draw the HD maps. This reduces production costs compared to conventional MMS devices while ensuring the same level of accuracy and recency. AMBIDEX, robot arm with innovative cable-driven mechanisms AMBIDEX is a robot arm that can safely interact with people thanks to an innovative cable-driven power transfer mechanism. A single AMBIDEX arm weighs just 2.6 kg, lighter than the arm of a fully grown adult. Despite its light weight, it can handle payloads of up to 3 kg and operate at a maximum speed of 5 m/s. The strength of its seven joints can be controlled simultaneously and with precision. Because it can develop its operating skills through deep learning, it can provide people with a range of services that directly help them. Starting on 8 January next year, NAVER LABS will participate in CES 2019 in Las Vegas, USA, where the products that won CES 2019 Innovation Awards will be introduced along with various other achievements in the field of ambient intelligence, including artificial intelligence (AI), self-driving vehicles, robotics, and so on. NAVER LABS hopes to take this opportunity to create new possibilities in the location and mobility sector with partners on the global stage.
AROUND G is an indoor self-driving guide robot. It drives autonomously in large-scale indoor spaces, such as shopping malls, airports, and hotels. When giving directions, it uses the AR navigation technology installed on its main display to deliver location and route information in a vivid and immersive way. AROUND G can self-drive smoothly without using an expensive laser scanner device. The keys to this are the xDM Cloud of the AROUND Platform and the deep reinforcement learning algorithm running on its main body. The AROUND Platform is a solution that divides the fundamental functions required for a self-driving robot between two parts: a mapping robot and the xDM Cloud. First, the mapping robot, M1, drives autonomously around indoor spaces to collect spatial data, then uploads the collected map data to the xDM Cloud. The service robot then utilizes the data processed in the cloud, such as map data, visual localization, and path planning, to drive autonomously. An obstacle avoidance algorithm based on deep reinforcement learning runs on the robot's main body, responding smoothly to spontaneous events that may occur while giving directions. That is to say, the robot can move smoothly to a destination while naturally avoiding pedestrians and other obstacles that do not exist on the map. Our goal is to bring self-driving service robots into the mainstream. We will be able to more quickly bring about a time when we see a range of useful self-driving service robots in our daily lives if we can continue to reduce the production costs of self-driving technology by eliminating expensive laser scanners.
Self-driving vehicles have many sensors. They drive autonomously by processing a vast amount of data collected through those sensors. There is, however, a part of a self-driving vehicle that acts as both data and a sensor at the same time: the HD map. There is a reason we can describe an HD map as another sensor on a self-driving vehicle. Self-driving vehicles utilize an HD map, along with other sensor data, to improve the accuracy of their localization and to plan routes more effectively and safely. In this sense, an HD map is an essential element for the performance and safety of self-driving vehicles. This is why we are focusing on developing a new solution for precise, machine-readable HD maps that can be used in self-driving vehicles. The Hybrid HD Mapping technology we have unveiled is a truly unique solution. It is based on the organic integration of large-scale aerial photographs of each city with data from a mobile mapping system. First, we extract information related to the layout of the road surface from aerial images. Then, we organically integrate a point cloud collected by R1, our proprietary lightweight mobile mapping system (MMS), as it moves around that space. Compared to conventional HD maps constructed by MMS vehicles alone, our mapping process can reduce production costs and lead time significantly, all while maintaining the same degree of accuracy. NAVER LABS is independently researching and developing self-driving vehicles and has obtained a temporary permit for them from the Ministry of Land, Infrastructure, and Transport. This allows us to develop Hybrid HD Mapping directly, testing and comparing our research results on the road. We are also actively conducting research on localization technology that utilizes HD maps. This technology allows self-driving vehicles to identify their current location accurately and safely, even in the densest parts of cities, where GPS signals are easily lost. 
As more diverse self-driving machines and services are introduced, the importance of HD maps will only increase, and more advanced and diversified HD map-based algorithms can be expected to appear. Through the Hybrid HD Mapping technology, we hope to introduce a new HD map solution that satisfies the needs of both maintaining data accuracy and keeping production costs reasonable. See details of NAVER LABS' autonomous driving technologies
AHEAD is a three-dimensional augmented reality head-up display (3D AR HUD). That is to say, it is a 3D display technology that provides information directly within a driver's natural line of sight. With conventional HUD technology, the focal point of the informative image created by the display is not synchronized with the actual environment of the road, which can negatively affect the driver's focus. When the driver focuses on the information displayed on a conventional HUD, their view of the road is obscured, and the converse is true as well. To address this issue, AHEAD utilizes 3D optical technology that provides information which appears to the driver to be integrated into the actual environment of the road. It also covers both short- and long-distance information. Many benefits arise when the view of the actual road appears to the driver to be synchronized with the display's information. An image displayed on AHEAD looks like it actually exists on the road, which allows it to deliver information in the most natural manner. Because they do not have to adjust their focal point, the driver is able to maintain their attention, which effectively improves safety. It also causes less eye fatigue. Furthermore, once it is integrated with precise road and map data, even more accurate information will be able to be provided on the display. The space inside vehicles and driving environments are very unique. In the future, more and more information and services will be integrated to assist with driving and improve safety. Within that trend, AHEAD, which delivers information in a precise and safe manner without obscuring the view of the road, will be a new display solution that connects vehicles and information in the most useful and natural way possible. Download the leaflet
It is easy to get lost inside large-scale indoor spaces, like shopping malls. However, GPS does not work inside buildings, so smartphones are of little help in these cases. Even with a map in hand, there is the problem of knowing your current location. For indoor navigation, we need to construct a precise map of the indoor space, and also develop a technology that will accurately show our current location without using GPS. In the field test demonstration of indoor AR navigation, conducted at the COEX Mall in Seoul, NAVER LABS utilized visual localization technology along with data from various sensors to solve the issue of finding the current location. It is a technology that analyzes images from a smartphone camera to identify the current location. The precise indoor map and location data constructed by the mapping robot M1 were used as the key data for localization and navigation. In addition, for an even more intuitive user experience, we applied a technology that delivers turn-by-turn (TBT) direction information through AR. Our precision mapping technology and our visual and sensor-fusion localization technology, which utilize robots, have been developed for the purpose of providing directions and information services in indoor spaces while accurately identifying the current location, without having to construct a separate hardware infrastructure.
In our daily lives, there are still many unsolved problems related to space and movement between spaces. These are the problems that NAVER LABS is concerned about. Through the keynote speech given at DEVIEW 2018, we revealed our past deliberations on these issues and the results of our research. "AI: Not Artificial Intelligence, Ambient Intelligence" This was the talking point of the keynote speech. Ambient intelligence refers to “a technology that provides relevant information or actions in a timely and natural manner by recognizing and understanding the environment and its context,” and this is our technological vision. With this in mind, we unveiled the xDM Platform, an integrated location and mobility solution for people and self-driving machines. “xDM” stands for “extended definition and dimension map.” It is a combination of mapping, localization, and navigation technologies together with all the precision data we have gathered so far. It constructs precise 3D maps of indoor and outdoor environments to be used on smartphones and in self-driving machines, and it has rendering technology to automatically update those maps. It offers precise positioning technology that covers indoor, outdoor, and road environments without leaving any blind spots. It also stores real-time and real-space data, generating movement information and understanding contexts. The xDM Platform, which is a combination of the aforementioned technologies, is comprised of two packages. One package is the Wayfinding Platform, designed to help people find their current location and get directions through indoor and outdoor environments. The other package is the Autonomous Mobility Platform, designed for vehicles and self-driving machines. The Wayfinding Platform for People The Wayfinding Platform is a solution that allows people to move along faster and more convenient paths. 
Through a location API, this platform provides detailed location/movement information, such as smart geo-fencing, mobility pattern analysis, and personalized localization, to the user. In addition, POI information is continuously updated through the road/AR navigation API, which navigates users along the quickest routes in a fast and easy way, even inside large-scale indoor spaces where GPS does not work. Visual and sensor-fusion localization technology can recognize the user's current location accurately on the 3D indoor map created by the mapping robot M1, without the need for separate geolocation infrastructure. The platform also provides turn-by-turn (TBT) information based on geographic features, and delivers navigation information more intuitively through the AR navigation API. In the keynote address, a demonstration of the AR navigation technology was performed at COEX, Seoul. Also, our plan to collaborate with premier partners HERE and Incheon Airport Corp. was disclosed. We are waiting for more partners to collaborate with us. We also introduced scalable and semantic indoor mapping (SSIM), which automatically keeps indoor maps up to date. It is a technology that automates the indoor map creation, data collection, and maintenance processes by utilizing NAVER LABS' technologies in robotics, computer vision, visual localization, machine learning, and so on. Currently, we are focusing on the POI change detection stage, in which a self-driving service robot operating in indoor spaces automatically detects changes in POIs, and these changes are updated on the map. In the future, this will be extended to POI recognition and semantic mapping. The same technology will be applied to self-driving technology in outdoor and road environments. An Autonomous Mobility Platform for Self-Driving Vehicles and Robots These days, mobility solutions do not apply only to people. 
Soon, self-driving technology for robots, not to mention self-driving vehicles, will penetrate deeply into our daily lives. The Autonomous Mobility Platform is a solution for self-driving machines. In this keynote address, we unveiled new HD mapping technology for self-driving vehicles. An HD map is essential data for self-driving vehicles to identify their exact location and to search for the optimal route to a destination. NAVER LABS' Hybrid HD Map solution creates HD maps for each city by organically integrating route networks extracted from precision aerial photographs with data collected by R1, NAVER LABS' mobile mapping system. We are implementing algorithms for both 2D and 3D data that automatically extract the features required for mapping. In addition, based on this HD map, we are developing a solution that can accurately determine location, even in GPS shadow areas such as city centers where signals are blocked by high-rise buildings, by combining the map with information collected through a self-driving vehicle's GPS sensor, IMU sensor, CAN data, LIDAR signals, and camera images. Furthermore, we are collaborating with Qualcomm and Mando on research into ADAS technologies connected with Hybrid HD Maps, and various other self-driving technologies. The AROUND platform is a solution for bringing self-driving service robots into the mainstream. It utilizes precision 3D maps created with M1 and cloud-based route search algorithms to reduce the cost of robot production while maintaining high-quality self-driving performance. Unlike conventional self-driving robots, which have to perform core functions such as map creation, location identification, route creation, and obstacle avoidance by themselves, this platform can achieve highly precise indoor self-driving with only low-cost sensors and a small amount of processing power. 
Continuing from AROUND, which was used in YES24 book stores last year, we are now developing AROUND G, a self-driving guide robot that provides direction services in large-scale indoor spaces, such as shopping malls or airports. AROUND G will be outfitted with the AR navigation API to offer directions and guidance with an even more intuitive UX. Ambient Intelligence Technologies for the Present, Not the Future In this keynote, we presented NAVER LABS' research outcomes on optical technologies. AHEAD is a 3D AR HUD (head-up display). It uses 3D display technology to deliver information to drivers in a way that does not make them shift their focal point. Since the actual view of the road that the driver is watching has the same focal point as the display, the driver can take in location and mobility information more easily and in a more natural way. In the future, various information and services provided by the xDM Platform may be delivered naturally to drivers through AHEAD. We are also refining AMBIDEX, the robot arm we unveiled last year, to make it safer for interaction in daily environments. Unlike conventional robots, which primarily focus on position control, controlling force is more important for AMBIDEX. For this reason, we have developed a simulator for kinematic and dynamic modeling. By running simulator tests before powering up the robot, we have been able to improve safety and quickly collect a vast range of data for different conditions. NAVER LABS envisions a world where tools and technologies naturally coexist with our everyday life. Our presentation of these performance outcomes and the xDM Platform, through the DEVIEW 2018 keynote address, was part of our effort to realize that vision. We wish to understand the contexts of life in every space in which humanity resides, and to develop new services and tools based on that. 
We believe technology should understand people; people should not have to understand technology. NAVER LABS will not stop working towards the realization of this vision, and will continue to grow together with our partners, sharing our technology and constantly introducing new platforms.
NAVER LABS is developing a search engine based on Foursquare's point-of-interest (POI) data to provide a global localization service. The strategic partnership draws on our natural language processing (NLP) and map service technologies. Foursquare has an enormous amount of global POI data. People from around the world use Foursquare's service to visit places for different reasons and in different contexts. By adding our know-how and technology, we want to create an advanced POI search engine adapted to each individual's needs. We also expect to develop new business models combining the data and technology of both companies. NAVER LABS conducts research in ambient intelligence. It supports users by providing information through the understanding of their environment and lifestyle, centered on location and mobility. We see no frontier concerning a user or lifestyle: each is unique. As announced in the partnership with HERE, our collaboration with Foursquare extends our ambient intelligence vision to a global scale, opening the door to new services and technologies.
NAVER LABS has signed a Memorandum of Understanding with HERE to develop autonomous 3D indoor maps. Key to the creation of these maps is NAVER LABS' Scalable & Semantic Indoor Mapping (SSIM) technology. The development of indoor maps relies heavily on manual human work, making them not only lengthy and expensive to produce, but also difficult to keep up to date. Our advanced SSIM technology will provide an efficient solution to automatically update Points of Interest (POI) in indoor environments, where the information changes all the time. The blueprint for autonomous indoor mapping with HERE and SSIM is as follows: (1) a 3D high-resolution map is created with the laser scanner and high-performance camera of the mapping robot M1, which moves across the indoor area; (2) data on the indoor space is continuously collected by the AROUND service robot; (3) the data AROUND collects is then analyzed by AI technology, which detects any changes in the environment and updates the service in real time. We expect this automatic solution to revolutionize how indoor maps are created and maintained. Together with HERE, we are moving ahead with a proof of concept of advanced SSIM. Through this project we will be maturing the SSIM technology, and we expect to develop a cornerstone for indoor map construction and a foundation for future innovations.
An image-based safe lane change (SLC) algorithm is proposed to aid the lane-change maneuvers of both autonomous driving agents and human drivers. A binary classification (free or blocked) is performed to secure the safety of the ego-vehicle's surroundings before moving to a target lane. For precise classification, SLC uses a Convolutional Neural Network (ConvNet) that learns image features from a large-scale dataset. A ConvNet is powerful in that it is able to extract subtle image features that we could not obtain with hand-crafted functions before; however, we may also come to doubt the ConvNet when its outcomes are not aligned with our intuition. In fact, we cannot handle anomalous events if we do not understand how the ConvNet works. We know the road environment changes every moment; we therefore cautiously test autonomous driving functions before deploying them on the road. In other words, understanding the internal mechanisms of the ConvNet is essential before adapting it to autonomous driving systems. From recent research on weakly-supervised object localization, we found a clue to how the ConvNet makes its decisions. In this article, we would like to introduce Class Activation Mapping (CAM) and analyze where the SLC algorithm looks in images. So, what is the weakly-supervised object localization task? To solve well-defined machine learning problems, supervised learning algorithms require plenty of data points and the corresponding ground truth labels. For image classification, a dataset consists of images and the keywords that describe them. On the other hand, to learn a model for an object detection task, we need not only the object names but also the image coordinates of the objects (see Fig. 1). As tasks become more difficult, building a new dataset for a supervised learning setup consumes more time and cost. Thus, researchers look for new methods to apply existing large-scale datasets to different domains. 
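To make the difference between the two label types concrete, here is a minimal sketch of what the ground-truth records for each task might look like. The file name, class names, and bounding-box convention are hypothetical illustrations, not taken from the article's dataset:

```python
# Illustrative ground-truth records for the two tasks.
# Image classification needs only a class keyword per image;
# object detection additionally needs image coordinates for each object.

classification_label = {"image": "road_0001.jpg", "class": "car"}

detection_label = {
    "image": "road_0001.jpg",
    "objects": [
        # [x_min, y_min, x_max, y_max] in pixel coordinates (hypothetical)
        {"class": "car", "bbox": [120, 45, 310, 200]},
        {"class": "pedestrian", "bbox": [20, 60, 75, 210]},
    ],
}

# Weakly-supervised object localization trains only on records like
# `classification_label`, yet still tries to localize the objects.
print(len(detection_label["objects"]))  # 2
```

The extra per-object coordinates are exactly the annotations that make detection datasets so much more expensive to build, which is what motivates reusing classification datasets.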
For example, weakly-supervised object localization attacks the object detection task using image classification datasets, in which the object localization labels are missing. Fig. 1: For an image, the ground truth label varies depending on the task: examples of ground truth labels for image classification (left) and for object detection (right) How do we learn a model for image classification? For image classification, the architecture of most ConvNets can be divided into two parts: convolutional layers to compute image features and fully-connected layers for classification (see Fig. 2). Fig. 2: Image features are computed with convolutional layers, then go through the fully-connected layers for a prediction. Supervised learning algorithms attempt to reduce the difference between the prediction (x) and the ground truth (y) during the training phase. We lose spatial information when reshaping an image feature as input to the subsequent fully-connected layers. In the weakly-supervised object localization task, we instead exploit the intermediate image features computed by the convolutions to obtain the salient regions for a prediction. The CAM algorithm assumes that the salient regions containing large parts of a certain object will be activated during classification. More precisely, we explain the CAM algorithm using the VGG16 network architecture. VGG16 generates image features of size (512, 7, 7) at the last convolutional layer when it takes a (3, 224, 224) input image. Viewing this image feature as a (7, 7) map with 512 different channels, each channel contributes differently to the classification of the given object classes. Thus, the CAM algorithm learns the relative importance of the channels at the following fully-connected layer. Using those weights, we aggregate the feature maps over the channels and finally obtain a saliency map that shows where the ConvNet looks in the image for a prediction (see Fig. 3) Fig. 
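The channel-weighted aggregation described above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming the last-layer convolutional features and the weights of the fully-connected layer that follows them have already been extracted from the network (the random inputs below stand in for real VGG16 activations):

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Compute a CAM saliency map for one class.

    features   : (C, H, W) feature maps from the last convolutional layer,
                 e.g. (512, 7, 7) for VGG16 with a 224x224 input.
    fc_weights : (C, num_classes) weights of the fully-connected layer
                 that follows the convolutional features.
    class_idx  : index of the class to visualize.
    """
    w = fc_weights[:, class_idx]                       # (C,) channel importance
    # Weighted sum of the feature maps over the channel axis -> (H, W)
    cam = np.tensordot(w, features, axes=([0], [0]))
    cam -= cam.min()                                   # rescale to [0, 1]
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example with random stand-in features (2 classes: free / blocked)
rng = np.random.default_rng(0)
feats = rng.standard_normal((512, 7, 7))
weights = rng.standard_normal((512, 2))
cam = class_activation_map(feats, weights, class_idx=1)
print(cam.shape)  # (7, 7)
```

In practice the (7, 7) map is upsampled to the input resolution and overlaid on the image as a heatmap, as in the Caffe and Keras implementations listed in the references.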
3: Since the weakly-supervised object localization task gives us no information about the objects' locations in the image, we cannot apply the supervised learning regime to learn a model. Instead, the CAM algorithm adaptively sums the image features, where the weights are identical to the parameters of the fully-connected layer that follows the convolutions. We can now see the activated areas where the ConvNet focuses to predict a class. Back to the stories of the autonomous driving research To learn an SLC model, we annotated rear-side view images, captured in various road environments, according to the following criteria: Blocked if the ego-vehicle cannot physically move to the target lane; Free if the ego-vehicle can move to the target lane; and Undefined for ambiguous situations such as crosswalks and other unusual scenes. The annotation rules are akin to a human driver's decision-making process for lane changes -- we instantly decide whether to move to a target lane by checking the rear-side view mirrors. To tolerate various driving behaviors when building the dataset, we only accept a ground truth label when multiple annotators agree on the status of the scene. Can the SLC model make correct predictions on roads it has not visited? Yes, it can. To examine the generalization performance of the SLC model, we tested on images that were not used during the training phase and achieved 96.98% classification accuracy. Using CAM, we also verified that the SLC model works as we intended. We replaced the fully-connected layers of the SLC model with a single fully-connected layer of length 512. With the convolutional parameters fixed, we fine-tuned the SLC model on the same dataset to obtain saliency maps. As shown in Fig. 4, similar to human drivers, the SLC model looks at the space in the adjacent lane to judge the probability of a successful lane change. Fig. 
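The agreement rule used to filter the annotations can be sketched as follows. This is an illustration only: the article does not specify the number of annotators or the exact threshold, so the unanimity default and the label spellings here are assumptions:

```python
from collections import Counter

def consensus_label(annotations, min_agreement=1.0):
    """Keep a frame only if annotators agree on its status.

    annotations   : labels from independent annotators, each one of
                    "free", "blocked", or "undefined".
    min_agreement : fraction of annotators that must agree
                    (1.0 = unanimous; an assumption, not from the article).
    Returns the consensus label, or None if the frame is discarded.
    """
    if not annotations:
        return None
    label, count = Counter(annotations).most_common(1)[0]
    # "undefined" frames are discarded regardless of agreement.
    if label != "undefined" and count / len(annotations) >= min_agreement:
        return label
    return None

print(consensus_label(["free", "free", "free"]))     # free
print(consensus_label(["free", "blocked", "free"]))  # None
```

Discarding disagreements this way trades dataset size for label quality, which matters for a safety-critical binary decision like lane changing.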
4: The classification result of the SLC model (left), and the CAM visualization highlighting the areas used for the prediction (right) The following video was recorded inside the autonomous driving car running in a complex urban road environment; the results of the perception algorithms are also displayed on the right. The SLC algorithm deployed in the NAVER LABS autonomous driving car secures safety for lane-change operations. References 1) S.-G. Jeong, J. Kim, S. Kim, and J. Min, End-to-end Learning of Image based Lane-Change Decision, in Proc. IEEE IV’17 2) B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, Learning Deep Features for Discriminative Localization, in Proc. IEEE CVPR’16 3) MatCaffe implementation of class activation mapping: https://github.com/metalbubble/CAM 4) Keras implementation of class activation mapping: https://github.com/jacobgil/keras-cam
At last year’s DEVIEW, the NAVER LABS robotics team announced the 3D indoor mapping robot, M1. Since then, M1 has evolved into the product AROUND, which was unveiled at this year’s annual conference. AROUND has been designed to popularize indoor autonomous robots, whose high price tag has so far prevented their penetration of the consumer market. By making them more accessible, people will be able to experience a number of indoor autonomous driving robot services in different spaces and environments. The LABS solution distributes the core functions of autonomous driving that constitute a high proportion of the manufacturing costs. Up to now, robots had to produce maps, identify locations, create routes, and avoid obstacles all by themselves. NAVER LABS has allocated these requirements to different devices that work in tandem: AROUND, M1, and the map cloud. M1 produces the map, the map cloud creates the routes, and AROUND focuses on accurate autonomous driving and obstacle avoidance using only low-cost sensors and little processing power. The reduction in manufacturing costs will make it possible to mass produce customised indoor service robots that can assist people in many different places and in many different ways. AROUND is scheduled to operate for the first time at the YES24 bookstore in the F1963 complex in Busan. AROUND will collect books that customers have finished browsing in its storage unit and, once they exceed a certain weight, move them to a designated place. From there, employees can collect the books and put them back. This collection service solves one of the most tedious chores bookstore employees have to deal with on a daily basis. Since the store's inventory is computerized, if even a single book is in the wrong place, employees need to check all the surrounding books. AROUND is expected to significantly relieve staff of such painstaking work. 
AROUND will change the reading experience in bookstores because it connects the spaces where books are displayed with the spaces where people read them. AROUND will make it possible for people to choose their books and take them to a comfortable place for browsing instead of having to look at them standing up. When they’re done, they simply put them in AROUND, which will take them away. The ambient intelligence of AROUND lies in how it integrates user context with the cultural characteristics of a space to create a better experience.
NAVER LABS, an ambient intelligence company specialized in location & mobility, announced AKI at DEVIEW 2017. AKI, a location and mobility watch device for elementary school children and their parents, provides safety solutions by treating relationships as an important factor. Parents are naturally worried or concerned about their young children when they’re not with them. They’ll often want to know if they‘ve arrived safely at school or who they’re with at different times throughout the day. Children may also need to be reassured that someone will be there to pick them up after school, and when. To answer these questions, a number of pieces of information need to be gathered, including the accurate locations and places of the people involved. AKI is designed to provide parents with information on where their children are at any time, and can alert them when a child is in an unfamiliar place or performing unusual activities and movements. AKI utilizes NAVER LABS’ own WPS (WiFi positioning system), which provides an exact position even indoors, and its automatically controlled, low-power location detection recognizes behaviour. It is equipped with personalized WiFi fingerprinting technology. AKI detects the exact location of the child and how the child is moving with an activity detector and a movement classifier. It learns the pattern of the child’s daily routine by analyzing place, time and situation, so that it can alert parents when there is an ‘abnormality’, i.e. a place that is not part of the child’s daily routine. When the location of a child has been accurately identified, the information can be communicated in a natural, contextualised way. NAVER LABS strives to apply ambient intelligence to mobile user environments. AKI identifies the important parts of our lives revealed by location-based information. The location of a child is precious information that parents of young children naturally want to have.
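A WiFi fingerprinting system of the kind described works by matching the access points a device currently sees against signal-strength fingerprints surveyed in advance. The sketch below shows the simplest nearest-neighbor form of this idea; all names and values are hypothetical, not NAVER LABS' actual WPS:

```python
# Hypothetical WiFi fingerprint positioning sketch (nearest neighbor in
# RSSI space); illustrative only, not NAVER LABS' actual WPS.
def locate(scan, fingerprint_db):
    """Return the surveyed position whose fingerprint best matches the scan.

    scan:           {bssid: rssi_dBm} observed by the device right now
    fingerprint_db: {position: {bssid: rssi_dBm}} surveyed beforehand
    """
    def distance(fp):
        shared = set(scan) & set(fp)
        if not shared:
            return float("inf")
        # Mean squared RSSI difference over access points seen in both
        return sum((scan[b] - fp[b]) ** 2 for b in shared) / len(shared)

    return min(fingerprint_db, key=lambda pos: distance(fingerprint_db[pos]))

db = {
    "classroom": {"ap1": -40, "ap2": -70},
    "cafeteria": {"ap1": -75, "ap2": -45},
}
pos = locate({"ap1": -42, "ap2": -68}, db)  # → "classroom"
```

A production system refines this with many more access points, temporal filtering and the kind of personalization and low-power scan scheduling the article mentions.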
AKI is equipped with the ambient intelligence philosophy and technology of NAVER LABS and will be available this year.
NAVER LABS has introduced AIRCART at the YES 24 bookstore. The electric cart delivers books from the warehouse to the store. It was named ‘AIRCART’ because the motor automatically increases its power, giving the impression that the cart is gliding even when carrying heavy objects. Equipped with an automatic braking system, it is safe to take up and down slopes. As bookstores can be busy places, AIRCART has been designed so that users can easily see whether there is sufficient space in front of the cart, to prevent collisions and ensure the safety of small children. The shelves of the cart are tilted inwards so that more books can be loaded without falling out. AIRCART is equipped with physical human-robot interaction (pHRI) technology, a technology also used in wearable human power amplifiers. The movement of the cart (momentum and direction) is controlled in real time by identifying the user’s intentions through the force sensor on the cart handle. This makes it easy for anyone to use AIRCART with no prior experience. NAVER LABS’ research in location and mobility is driven by the desire to provide natural, useful everyday services that impact people’s lives, and its research in robotics is no exception. AROUND and AIRCART are two examples of technologies that add value to people’s lives. The NAVER LABS robotics team will continue collaborating with partners and entrepreneurs so that people can benefit from new ambient intelligence services and products.
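Handle-force control of the kind described for pHRI is commonly realized as admittance control: the force sensed at the handle drives a virtual mass-damper model whose output velocity commands the wheels, making a heavy cart feel light. A minimal one-dimensional sketch, with assumed gains rather than AIRCART's real parameters:

```python
# Admittance-control sketch for a power-assisted cart (illustrative only;
# the virtual mass and damping values are assumptions, not AIRCART's).
def step_velocity(v, handle_force, dt, mass=20.0, damping=15.0):
    """Advance the cart velocity one time step.

    The virtual dynamics  mass * dv/dt = force - damping * v  mean the cart
    responds to handle force as if it were far lighter than its payload.
    """
    dv = (handle_force - damping * v) / mass
    return v + dv * dt

v = 0.0
for _ in range(200):                 # push forward with 30 N for 2 s
    v = step_velocity(v, 30.0, dt=0.01)
# velocity approaches the steady state force / damping = 2.0 m/s
```

Steering works the same way: the torque component of the handle force feeds an analogous rotational model, so momentum and direction both follow the user's intent in real time.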
AMBIDEX is a robot arm that interacts very naturally with humans. It is the fruit of a long-term research project with Korea Tech and, in particular, with professor Yong-Jae Kim, a world leader in the field with world-class capabilities in robotic arm mechanism design. Robot arms have a long history in robotics research, where they have mainly been developed for manufacturing purposes focused on precision, repetition and heavy-load work. This kind of heavy, bulky robot arm is not well suited to a home setting and could even be considered dangerous. NAVER LABS’ work in the areas of hardware, control, recognition and intelligence aims at making the robot arm in the home a reality. AMBIDEX, one of the fruits of this research, was unveiled on stage at DEVIEW. AMBIDEX is safe for people to interact with and even lighter than a human arm. AMBIDEX uses cable-driven mechanisms that place all the heavy actuators in the shoulder and body. This makes the arms lighter and allows them to be driven with wires. Using innovative mechanisms that enhance the force and strength of each joint, AMBIDEX has achieved the same level of control, performance and precision as industrial robots. AMBIDEX aims to be a breakthrough robotic hardware solution that can work safely, flexibly and precisely with humans.
At this year’s DEVIEW, a whole range of new ambient intelligence products and technologies were revealed in the NAVER LABS keynote. Ambient intelligence technology detects and understands humans and their contexts to naturally provide information or perform actions at the time of need. During his keynote, Changhyun Song, CEO of NAVER LABS and NAVER CTO, emphasized the motivation behind the ambient intelligence research he leads. “In this world where tools and information are overflowing, technology needs to understand humans and environments even better. The real value of technology will only be realized when it has become part of the fabric of everyday life”. All of the research results shared during the keynote contribute to the NAVER LABS’ vision of ambient intelligence and we will continue to focus on technology, products and services that directly impact people. NAVER LABS envisions a future where people and society are not restricted by tools and technology. It is a world where people can focus on the things they value most in life and where ambient intelligence helps them do so.
AWAY is an infotainment platform for vehicles with a user interface that enhances driver safety and which specifically optimizes music, news and other media services for the driving environment. The AWAY head unit gives drivers simultaneous access to various functions, from media content to navigation, on a wide 24:9 ratio screen that supports split view. AWAY has been deployed for vehicles operated by the Korean car sharing company Green Car. Green Car plans to install AWAY in 3,000 vehicles within the year.
NAVER Corporation and Xerox Corporation today announced an agreement for NAVER to acquire the Xerox Research Centre Europe in Grenoble, France. The French Works Council’s consultation on this project has now been completed and the agreement is expected to close in the third quarter, subject to fulfillment of certain customary conditions. Once the sale becomes final, all 80-plus researchers and administrative staff are expected to become part of NAVER LABS. Based in Seongnam, South Korea, NAVER is Korea’s leading Internet company, operating the nation’s top search portal “NAVER” and other innovative services in the global market, such as the mobile messenger LINE, video messenger SNOW and community app BAND. NAVER LABS is an ambient intelligence company that develops future technologies including autonomous driving, robotics and artificial intelligence. Since its establishment as NAVER’s R&D division in 2013, it has led NAVER’s innovation in technology through products such as Papago, an AI-based translation app; Whale, the omni-tasking web browser; and M1, the 3D indoor mapping robot. Founded in 1993, the Xerox Research Centre Europe is located just outside Grenoble, often dubbed the Silicon Valley of Europe. The centre has focused its research on artificial intelligence (AI), machine learning, computer vision, natural language processing and ethnography. “The research expertise at the European centre is perfectly aligned with NAVER LABS’. We expect immediate, powerful synergies,” said Chang-hyeon Song, CEO of NAVER LABS and CTO of NAVER. 
“XRCE's world-class R&D achievements in AI technology, including computer vision and machine learning, will significantly strengthen NAVER LABS’ research in ‘ambient intelligence’, including autonomous vehicles, AI/deep learning, intelligent 3D mapping, robotics and natural language processing.” With such a strong foothold in Europe, NAVER LABS expects to considerably accelerate its development of ambient intelligence technologies around the globe, and in particular in AI. NAVER LABS Europe homepage
The autonomous vehicle developed by NAVER LABS was the first in South Korea's IT industry to receive a temporary operating permit from the Ministry of Land, Infrastructure and Transport in February 2017. This allowed us to add to our autonomous driving technologies by combining data on actual driving conditions with the deep learning technologies that we had already amassed. In the future, we are planning to develop safer and more convenient mobility solutions by conducting research into additional autonomous driving technologies. We will also continue to turn numerous possibilities created by the connection of cars and data into safety and convenience on actual roads.
M1 is an indoor 3D/HD mapping robot that navigates autonomously in indoor spaces. M1 automatically collects high-resolution images and 3D spatial data via high-performance cameras and LiDAR, significantly improving the efficiency of what was previously a manual mapping process. The resulting HD maps provide spatial data that is essential to location-based services.
Company overview Founded in 2013 as NAVER's research center, NAVER LABS spun off as a separate entity in 2017 to focus its research on ambient intelligence in areas such as autonomous driving, robotics, artificial intelligence and geospatial data. NAVER LABS' mission is to achieve ambient intelligence that enriches user environments with technology that proactively understands users and provides them with information and services. In line with this mission, distinguished researchers from Korea and Europe are committed to understanding the places where people live their lives, connecting these locations to shape the future of mobility. Corporate media contents [Video] NAVER LABS, an Ambient Intelligence company [Video] NAVER LABS Intelligence in Mobility concept [Video] NAVER LABS Robot M1 [Video] NAVER LABS Space & Mobility Interview [Video] NAVER LABS M1 3D indoor mapping process [Video] NAVER LABS IVI (In-vehicle infotainment) [Video] NAVER LABS AROUND indoor robot [Video] NAVER LABS AMBIDEX robotic arm [Video] NAVER LABS AIRCART power-sensitive cart Corporate media channel Web site Facebook Instagram Youtube SlideShare Behance