NAVER is a company creating new ways for people to discover and connect. The information and services we offer are based on contextual understanding, personalization and natural interfaces. To seamlessly integrate these services into diverse life experiences, NAVER LABS is developing innovative technology in robotics, autonomous mobility and location intelligence. Learn more about us in the NAVER exhibition area at CES 2019. ■ About Company NAVER NAVER Co., Ltd. is South Korea’s largest web search engine, as well as a global ICT brand providing services that include the LINE messenger, currently with over 200 million users around the world, the SNOW video app, and the digital comics service NAVER WEBTOON. Meanwhile, NAVER BAND, a group SNS service, has achieved a million MAU. Sustained research and development in AI, robotics, mobility and other future technologies propels NAVER’s pursuit of the transformation and innovation of technology platforms, while the company also remains devoted to shared growth with users from the global community and its many partners. In 2018, NAVER was ranked 9th on Forbes’ list of the world’s most innovative companies and 6th on Fortune’s Future 50 list. NAVER LABS Founded in 2013 as NAVER’s research center, NAVER LABS spun off as a separate entity in 2017 to focus its research on ambient intelligence in areas such as autonomous driving, robotics, artificial intelligence and geospatial data. NAVER LABS’ mission is to achieve ambient intelligence that enriches user environments with technology that proactively understands users and provides them with information and services. In line with this mission, distinguished researchers from Korea and Europe are committed to understanding the places where people carry out their lives, connecting these locations and shaping the future of mobility. ■ About CES CES® is the world’s gathering place for all who thrive on the business of consumer technologies.
It has served as the proving ground for innovators and breakthrough technologies for 50 years: the global stage where next-generation innovations are introduced to the marketplace. As the largest hands-on event of its kind, CES features all aspects of the industry. CES 2019 will run January 8-11, 2019 in Las Vegas, NV. ■ Booth Location Tech East, LVCC, Central Plaza – CP 14 ■ CES 2019 Innovation Awards Honorees R1, Mobile Mapping System (Vehicle intelligence and self-driving technology) AWAY, In-vehicle Infotainment Platform (In-vehicle audio/video) AHEAD, 3D AR HUD (In-vehicle audio/video) AMBIDEX, Robot Arm with Innovative Cable-Driven Mechanisms (Robotics and drones) ■ Exhibitions xDM platform, eXtended Definition & Dimension Map The xDM platform is an integrated location and mobility solution for smartphones and autonomous machines. It combines the NAVER LABS portfolio of robot- and AI-based technologies such as high-definition (HD) mapping, localization and navigation with high-precision spatial data. Key features are self-updating 3D/HD mapping, precise indoor and outdoor positioning, and context-aware location information based on real-time spatial data. The platform supports the development of future location-based services such as augmented reality (AR) walking navigation and autonomous mobility services that include autonomous vehicles, service robots and advanced driver-assistance systems (ADAS).
Mapping Solutions M1, Indoor Autonomous Mapping Robot Self-Updating Map Autonomous Robots AROUND Platform, Autonomous Service Robot Platform AROUND G, Autonomous Guide Robot Autonomous Driving Hybrid HD Map & R1 ADAS CAM NAVER Maps & Wayfinding Indoor AR navigation AWAY, In-Vehicle Infotainment Platform AKI, Smart Watch for Kids AHEAD, 3D AR HUD Robotics AMBIDEX, Robot Arm with Innovative Cable-Driven Mechanisms AIRCART, Human-Power Amplification Technology ■ Demonstration Schedule (1/8-1/10) AROUND G 11:00 / 13:00 / 15:00 / 17:00 AMBIDEX 11:30 / 13:30 / 15:30 / 17:30 ■ Contact Partnership Proposal firstname.lastname@example.org Media Contacts Ryan Hyeonwoo Lee email@example.com (LINE) hlee293 Dong-keun Han firstname.lastname@example.org (LINE) drake3323
NAVER LABS is developing a search engine based on Foursquare’s point-of-interest (POI) data to provide a global localization service. The strategic partnership draws on our natural language processing (NLP) and map service technologies. Foursquare has an enormous amount of global POI data: people from around the world use Foursquare’s service to visit places for different reasons and in different contexts. By adding our know-how and technology, we want to create an advanced POI search engine adapted to each individual’s needs. We also expect to develop new business models combining the data and technology of both companies. NAVER LABS conducts research in ambient intelligence: it supports users by providing information through an understanding of their environment and lifestyle, centered on location and mobility. We see no boundaries when it comes to users or lifestyles: each is unique. As with the previously announced partnership with HERE, our collaboration with Foursquare extends our ambient intelligence vision to a global scale, opening the door to new services and technologies.
NAVER LABS has signed a Memorandum of Understanding with HERE to develop autonomous 3D indoor maps. Key to the creation of these maps is the NAVER LABS Scalable & Semantic Indoor Mapping (SSIM) technology. The development of indoor maps relies heavily on manual human work, making them not only lengthy and expensive to produce but also difficult to keep up to date. Our advanced SSIM technology will provide an efficient solution to automatically update points of interest (POI) in indoor environments where the information changes all the time. The blueprint for autonomous indoor mapping with HERE and SSIM is as follows: (1) a 3D high-resolution map is created with the laser scanner and high-performance camera of the mapping robot M1, which moves across the indoor area; (2) data on the indoor space is continuously collected by the AROUND service robot; (3) the data AROUND collects is then analyzed by AI technology, which detects any changes in the environment and updates the service in real time. We expect this automatic solution to revolutionize how indoor maps are created and maintained. Together with HERE, we are working toward the proof of concept of advanced SSIM. Through this project we will mature the SSIM technology and expect to develop a cornerstone for indoor map construction and a foundation for future innovations.
An image-based safe lane change (SLC) algorithm is proposed to aid lane-change maneuvers for both autonomous driving agents and human drivers. A binary classification (free or blocked) is performed to assess the safety of the ego-vehicle's surroundings before moving to a target lane. For precise classification, SLC uses a convolutional neural network (ConvNet) that learns image features from a large-scale dataset. ConvNets are powerful in that they can extract subtle image features that hand-crafted functions could not capture before; however, we may also come to doubt a ConvNet when its outputs do not align with our intuition. In fact, we cannot handle anomalous events if we do not understand how the ConvNet works. Road environments change every moment, so we test autonomous driving functions cautiously before deploying them on the road. In other words, understanding the internal mechanisms of the ConvNet is essential before adopting it in autonomous driving systems. Recent research on weakly-supervised object localization gave us a clue to how the ConvNet makes decisions. In this article, we introduce Class Activation Mapping (CAM) and analyze where the SLC algorithm looks in images. So, what is the weakly-supervised object localization task? To solve well-defined machine learning problems, supervised learning algorithms require plenty of data points and the corresponding ground truth labels. For image classification, a dataset consists of images and the keywords that describe them. On the other hand, to learn a model for an object detection task, we need not only the object names but also the image coordinates of the objects (see Fig. 1). As the task becomes harder, building a new dataset for supervised learning costs more time and money. Thus, researchers look for new ways to apply existing large-scale datasets to different domains.
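The difference between the two kinds of ground truth in Fig. 1 can be made concrete with a small illustrative sketch (the file name, class keyword, and bounding-box coordinates here are made up for the example, not taken from any real dataset):

```python
# Ground-truth label for image classification: just a class keyword per image.
classification_label = {"image": "scene_01.jpg", "class": "car"}

# Ground-truth label for object detection: the class keyword PLUS the image
# coordinates of each object, e.g. a bounding box (x, y, width, height).
detection_label = {
    "image": "scene_01.jpg",
    "objects": [
        {"class": "car", "bbox": (48, 60, 120, 90)},
    ],
}

# The detection label strictly contains more information: annotators must
# localize every object, which is why such datasets cost more to build.
print(classification_label["class"], detection_label["objects"][0]["bbox"])
```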
For example, weakly-supervised object localization tackles the object detection task using image classification datasets, where the object localization labels are missing. Fig. 1: For the same image, the ground truth label varies depending on the task: examples of ground truth labels for image classification (left) and for object detection (right). How do we learn a model for image classification? For image classification, the architecture of most ConvNets can be divided into two parts: convolutional layers to compute image features and fully-connected layers for classification (see Fig. 2). Fig. 2: Image features are computed with convolutional layers and go through the fully-connected layers for a prediction. Supervised learning algorithms attempt to reduce the difference between the prediction (x) and the ground truth (y) during the training phase. We lose spatial information when reshaping an image feature to feed the following fully-connected layers. In the weakly-supervised object localization task, we instead exploit the intermediate image features computed by the convolutions and obtain the salient regions behind a prediction. The CAM algorithm assumes that the regions containing many parts of a certain object will be strongly activated during classification. More precisely, we explain the CAM algorithm with the VGG16 network architecture. VGG16 generates image features of size (512, 7, 7) at the last convolution layer when it takes a (3, 224, 224) input image. Viewing this feature as a (7, 7) map with 512 channels, each channel contributes differently to the classification of each object class. Thus, the CAM algorithm learns the relative importance of the channels at the following fully-connected layer. Using those weights, we aggregate the feature maps over the channels and finally obtain a saliency map that shows where the ConvNet looks in the image to make a prediction (see Fig. 3).
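The channel-weighted aggregation described above can be sketched in a few lines of NumPy. This is a minimal sketch, not the SLC implementation: we stand in random values for the (512, 7, 7) feature maps of the last VGG16 convolution layer and for the fully-connected weights of one class, and all variable names are illustrative.

```python
import numpy as np

# Stand-in for the (512, 7, 7) feature maps from the last conv layer of VGG16.
rng = np.random.default_rng(0)
features = rng.random((512, 7, 7))

# Stand-in for the fully-connected weights of one target class:
# one scalar importance per feature channel.
class_weights = rng.random(512)

def class_activation_map(features, weights):
    """Weighted sum of feature maps over channels: the core of CAM."""
    # Contract the channel axis: (512,) x (512, 7, 7) -> (7, 7).
    # Each channel's map is scaled by its learned importance, then summed.
    cam = np.tensordot(weights, features, axes=([0], [0]))
    # Normalize to [0, 1] so the map can be rendered as a heatmap.
    cam -= cam.min()
    cam /= cam.max()
    return cam

cam = class_activation_map(features, class_weights)
print(cam.shape)  # (7, 7)
```

In practice the (7, 7) map is then upsampled to the 224x224 input resolution and overlaid on the image as a heatmap, which is exactly the visualization shown in the figures.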
Fig. 3: Since in the weakly-supervised object localization task we have no information about the objects' locations in the image, we cannot apply the usual supervised learning regime. Instead, the CAM algorithm adaptively sums the image features, where the weights are identical to the parameters of the fully-connected layer following the convolutions. We can then see the activated areas where the ConvNet focuses to predict a class. Back to our autonomous driving research. To learn an SLC model, we annotated rear-side view images, captured in various road environments, according to the following criteria: Blocked if the ego-vehicle cannot physically move to the target lane; Free if the ego-vehicle can move to the target lane; and Undefined for ambiguous situations such as crosswalks and other unusual scenes. The annotation rules are akin to human drivers' decision-making process for lane changes -- we instantly decide whether to move to a target lane by checking the rear-side view mirrors. To tolerate various driving behaviors when building the dataset, we only accept a ground truth label when multiple annotators agree on the status of the scene. Can the SLC model make a correct prediction on roads it has never visited? Yes, it can. To examine the generalization performance of the SLC model, we tested it on images that were not used during the training phase and achieved 96.98% classification accuracy. Using CAM, we also verified that the SLC model behaves as we intended. We replaced the fully-connected layers of the SLC model with a single fully-connected layer of length 512. With the parameters of the convolutions fixed, we fine-tuned the SLC model on the same dataset to obtain saliency maps. As shown in Fig. 4, similar to human drivers, the SLC model looks at the space in the adjacent lane to judge the probability of a successful lane change.
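The annotator-agreement rule above (a frame only enters the training set when every annotator agrees, and ambiguous frames are discarded) can be sketched as follows. The frame names and label values are hypothetical examples, not real SLC data:

```python
# Hypothetical labels from three independent annotators for a few
# rear-side view frames, following the Blocked/Free/Undefined criteria.
annotations = {
    "frame_001.jpg": ["free", "free", "free"],
    "frame_002.jpg": ["blocked", "blocked", "free"],    # disagreement
    "frame_003.jpg": ["blocked", "blocked", "blocked"],
    "frame_004.jpg": ["undefined", "undefined", "undefined"],  # ambiguous
}

def accepted_labels(annotations):
    """Keep only frames where all annotators agree, dropping 'undefined'."""
    dataset = {}
    for image, labels in annotations.items():
        if len(set(labels)) == 1 and labels[0] != "undefined":
            dataset[image] = labels[0]
    return dataset

print(accepted_labels(annotations))
# {'frame_001.jpg': 'free', 'frame_003.jpg': 'blocked'}
```

Filtering out disagreements this way trades dataset size for label quality, which matters when the classifier's output directly gates a safety-critical maneuver.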
Fig. 4: The classification result of the SLC model (left), and the visualization result using CAM to highlight the areas behind a prediction (right). The following video was recorded inside the autonomous driving car running in a complex urban road environment; the results of the perception algorithms are displayed on the right. The SLC algorithm deployed in the NAVER LABS autonomous driving car secures the safety of lane-change operations. References 1) S.-G. Jeong, J. Kim, S. Kim, and J. Min, End-to-end Learning of Image-based Lane-Change Decision, in Proc. IEEE IV’17 2) B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, Learning Deep Features for Discriminative Localization, in Proc. IEEE CVPR’16 3) MatCaffe implementation of Class Activation Mapping: https://github.com/metalbubble/CAM 4) Keras implementation of Class Activation Mapping: https://github.com/jacobgil/keras-cam
At this year’s DEVIEW, a whole range of new ambient intelligence products and technologies were revealed in the NAVER LABS keynote. Ambient intelligence technology detects and understands humans and their contexts to naturally provide information or perform actions at the time of need. During his keynote, Changhyun Song, CEO of NAVER LABS and NAVER CTO, emphasized the motivation behind the ambient intelligence research he leads: “In this world where tools and information are overflowing, technology needs to understand humans and environments even better. The real value of technology will only be realized when it has become part of the fabric of everyday life.” NAVER LABS shared advances made since last year on its ambient intelligence platform CLOVA (previously announced as ‘AMICA’), its neural machine translation technology PAPAGO, and the web browser WHALE. The theme was clearly focused on ‘ambient intelligence touching daily life through advances in location and mobility research’. 1. Location intelligence M1, Map cloud and AROUND to popularize autonomous driving robot services The mapping robot M1 showcased a 3D point cloud of the COEX and Lotte World Mall indoor spaces in Seoul, the DEVIEW venue. M1, which was unveiled at last year’s event, has been improved with epipolar geometry-based mapping technology. The robot captures the data of a space and uses it to make high-definition 3D indoor maps. M1 has also evolved into the product ‘AROUND’, which NAVER LABS believes will popularize indoor autonomous service robots. The high price tag for such robots has prevented their penetration of the consumer market due to the functionalities required for autonomous driving, such as map creation, position identification, route setting and obstacle avoidance. NAVER LABS has developed a solution that separates the map creation, carried out by M1, from the route creation and setting, done by accessing the map on the map cloud.
This makes the robot a whole lot cheaper, because AROUND is able to carry out accurate autonomous driving and successfully avoid obstacles using only low-cost sensors and little processing power. This solution significantly reduces production costs, opening up the possibility of mass-manufacturing robots that can assist people in whole new ways. The robot can also be easily customized according to the characteristics of the spaces in which it operates as well as the kind of application it will perform. AROUND can be seen in action in the YES24 book store in the Busan shopping complex F1963, helping customers and employees browse and organize books. AKI, an ambient intelligence device specialized in localization NAVER LABS released AKI, an ambient intelligence device specialized in localization. It is targeted at parents of young children between 6 and 8 years old. AKI is designed to let parents know where their children are at any time, and can alert them when their child is in an unusual place or performing unusual activities and movements. To determine the location of the device wearer, AKI contains an integrated Wi-Fi positioning system (WPS), which provides an exact indoor or outdoor position, automatically controlled low-power location detection that recognizes behavior, and personalized Wi-Fi fingerprinting technology. It detects the exact location of the child and how the child is moving with an activity detector and movement classifier. It learns the pattern of the child’s routine movements by analyzing place, time and situation, so that it can alert parents when there is unaccustomed movement, i.e. a place or activity that is not habitual in the child’s routine. AKI will be commercialized in the near future. 2. Mobility intelligence The first four-wheel balancing electronic skateboard The robotics team is working on a number of exciting mobility projects.
They presented the world’s first 4-wheel electronic balancing skateboard to complete the ‘personal last-mile’ of a journey. AIRCART physical human-robot interaction technology The AIRCART began as a project to simply alleviate tedious chores. It uses physical human-robot interaction (pHRI) technology, a technology used in wearable human power amplifiers. The AIRCART is easy to use even for a novice. Its movements, direction and speed are controlled via the ‘power sensor’ on the handle, which communicates the user’s intentions. Equipped with an automatic braking system, it is safe going uphill and downhill, the latter often being the more dangerous. Together with AROUND, AIRCART is being used at the YES24 book store. AWAY manufacturing tool kit and extension plans The in-vehicle infotainment platform AWAY was announced with plans to provide an open toolkit. AWAY naturally connects and safely delivers vehicle, road and traffic information as well as audio and video entertainment. The open toolkit will make it easy for content and service providers as well as manufacturers to integrate, customize and personalize services. From next year, it will be supported as an open platform, and released as a standalone commercial product shortly afterwards. Autonomous driving vehicles aim for SAE Level 4 in cities this year The autonomous driving car research is currently reinforcing its lane-based (self-)position recognition technology to be able to work in areas of cities where GPS is unavailable. In collaboration with KAIST (Korea Advanced Institute of Science and Technology), NAVER LABS has started research on rapid automatic information extraction from roads and signs in large urban areas. The DEVIEW video demonstrated the progress towards achieving SAE Level 4 in cities before the end of 2017.
3. Academic-Industrial collaboration to realize ambient intelligence in the mid/long term Research on mobility in living environments includes the challenge of getting up and down stairs During the keynote presentation devoted to robotics, several significant research breakthroughs were revealed in medium- and long-term ambient intelligence solutions. To achieve the NAVER LABS goal of making robots a part of everyday life, they need to be able to freely move around homes and other living spaces, understand people’s habits, and provide extra hands and arms that can offer physical aid and other kinds of services. The research conducted to help achieve these goals is carried out in parallel at NAVER LABS and in partnership with universities. Examples of these partnerships include the Cheetah 3 robot with MIT and the Jumping Robot with UIUC, which both demonstrate ongoing, long-term research on robot legs aimed at enabling robots to climb up and down stairs and over obstacles. With the same objective, the NAVER LABS Tuskbot and TT-bot projects were shared: Tuskbot is a four-wheeled stair-climbing robot, and TT-bot identifies objects and autonomously drives around them. Both projects were born from internships. AMBIDEX, the robot arm that touches everyday lives AMBIDEX is a robot arm that interacts very naturally with humans. It is the fruit of a long-term research project with Korea Tech and, in particular, with professor Yong-Jae Kim, a world leader in the field whose lab has world-leading capabilities in robot arm mechanism design. Robot arms have a long history in robotics research, where they have mainly been developed for manufacturing purposes focused on precision, repetition and heavy-load work. This kind of heavy, bulky robot arm is not well suited to a home setting and could even be considered dangerous. NAVER LABS’ work in the areas of hardware, control, recognition and intelligence aims at making the robot arm in the home a reality.
AMBIDEX, one of the fruits of such research, was unveiled on stage at DEVIEW. AMBIDEX is safe for people to interact with and even lighter than a human arm. AMBIDEX uses cable-driven mechanisms that place all the heavy actuators in the shoulder and body. This lightens the arms and means they can be driven with wires. Using innovative mechanisms that enhance the force and strength in each joint, AMBIDEX has achieved the same level of control, performance and precision as industrial robots. AMBIDEX aims to be a breakthrough robotic hardware solution that can work safely, flexibly and precisely with humans. All of the research results shared during the keynote contribute to the NAVER LABS vision of ambient intelligence, and we will continue to focus on technology, products and services that directly impact people. NAVER LABS envisions a future where people and society are not restricted by tools and technology: a world where people can focus on the things they value most in life, and where ambient intelligence helps them do so.
AROUND will increase the popularity and adoption of indoor robots At last year’s DEVIEW, the NAVER LABS robotics team announced the 3D indoor mapping robot M1. Since then M1 has evolved into the product AROUND, which was unveiled at this year’s annual conference. AROUND has been manufactured to increase the popularity of indoor autonomous robots, whose high price tag has so far prevented their penetration of the consumer market. By making them more accessible, people will be able to experience a number of indoor autonomous driving robot services in different spaces and environments. The LABS solution distributes the core functions of autonomous driving that constitute a high proportion of the manufacturing costs. Up to now, a single robot had to produce maps, identify locations, create routes and avoid obstacles. NAVER LABS has allocated these requirements to different devices that work in tandem: AROUND, M1 and the map cloud. M1 produces the map, the map cloud creates the routes, and AROUND focuses on accurate autonomous driving and avoiding obstacles using only low-cost sensors and little processing power. The reduction in manufacturing costs will make it possible to mass-produce customized indoor service robots that can assist people in many different places and in many different ways. Technology that solves issues and changes the experience AROUND is scheduled to operate for the first time at the YES24 bookstore at the F1963 shopping complex in Busan. AROUND will collect books that customers have finished browsing in its storage unit and move them to a designated place once they exceed a certain weight. From there, employees can collect the books to put them back. This solves one of the most tedious chores book store employees have to deal with on a daily basis. As the store’s book inventory is computerized, if even a single book is in the wrong place, employees need to check all the surrounding books.
AROUND is expected to significantly relieve staff from such painstaking work. AROUND will also change the reading experience in book stores, because it connects the spaces where books are displayed with the spaces where people read them. AROUND will make it possible for people to choose their books and take them to a comfortable place for browsing instead of having to look at them standing up. When they’re done, they simply put them in AROUND, which will take them away. The ambient intelligence of AROUND lies in its integration of user context and the cultural characteristics of a space to create a better experience. AIRCART - a physical human-robot interaction technology In addition to AROUND, NAVER LABS has introduced AIRCART at the YES24 bookstore. The electronic cart delivers books from the warehouse to the store. It was named ‘AIRCART’ because the motor automatically increases its power, giving the impression that the cart is gliding, even when carrying heavy objects. Equipped with an automatic braking system, it is safe going uphill and downhill. As bookstores can be busy places, AIRCART has been designed so that cart users can easily see whether there is sufficient space in front of the cart, to prevent collisions and for the safety of small children. The shelves of the cart are tilted inwards so that more books can be loaded and so that they don’t fall out. AIRCART is equipped with physical human-robot interaction (pHRI) technology, a technology used in wearable human power amplifiers. The movement of the cart (momentum and direction) is controlled in real time by identifying the user’s intentions through the power sensor on the cart handle. This makes it easy for anyone to use AIRCART with no prior experience. NAVER LABS’ research in space and mobility is driven by the desire to provide natural, useful everyday services that impact people’s lives, and its research in robotics is no exception. AROUND and AIRCART are two examples of technologies that add value to people’s lives.
The NAVER LABS robotics team will continue collaborating with partners and entrepreneurs so that people can benefit from new ambient intelligence services and products.
NAVER LABS, an ambient intelligence company specialized in location & mobility, announced AKI at DEVIEW 2017. ‘AKI’, a location and mobility watch device for elementary school children and their parents, provides safety solutions by treating relationships as an important factor. Parents are naturally worried or concerned about their young children when they’re not with them. They’ll often want to know if they’ve arrived safely at school or who they’re with at different times throughout the day. Children may also need to be reassured that someone will be there to pick them up after school, and when. To answer these questions, a number of pieces of information need to be gathered, including the accurate locations of where people are. AKI is designed to provide parents with information on where their children are at any time, and can alert them when they’re in an unfamiliar place or performing unusual activities and movements. AKI utilizes NAVER LABS’ own WPS (WiFi positioning system), which provides an exact position even indoors, and its automatically controlled, low-power location detection recognizes behaviour. It is also equipped with personalized Wi-Fi fingerprinting technology. AKI detects the exact location of the child and how the child is moving with an activity detector and movement classifier. It learns the pattern of the child’s daily routine by analyzing place, time and situation, so that it can alert parents when there is an ‘abnormality’, i.e. a place that is not part of the child’s daily routine. When the location of a child has been accurately identified, the information can be communicated in a natural, contextualised way. NAVER LABS strives to apply ambient intelligence to mobile user environments. AKI shows how location-based information can serve the important parts of our lives. The location of a child is precious information that parents of young children naturally want to have.
AKI is equipped with the ambient intelligence philosophy and technology of NAVER LABS and will be available this year.
NAVER LABS is an 'ambient intelligence company'. Through our research we develop technologies and products that provide people with information and services that are integrated and adapted to their changing contexts. In a world where technology and tools are omnipresent, our vision is to help people focus on what they care about most in life. Company overview Founder: Song Chang-Hyun Created: January 2017 Vision: Ambient intelligence Corporate media contents [Video] NAVER LABS, an Ambient Intelligence company [Video] NAVER LABS Intelligence in Mobility concept [Video] NAVER LABS Robot M1 [Video] NAVER LABS Space & Mobility Interview [Video] NAVER LABS M1 3D indoor mapping process [Video] NAVER LABS IVI (In-vehicle infotainment) [Video] NAVER LABS AROUND indoor robot [Video] NAVER LABS AMBIDEX robotic arm [Video] NAVER LABS AIRCART power-sensitive cart Corporate media channel Web site Facebook Instagram YouTube SlideShare Behance