Onboard Track Patrol Support System — Supporting Railway Track Inspection with Advanced Image Analysis

Sensing Technologies Underlying Social Systems: Sensing Technologies That Work Behind the Scenes

In the face of population decline brought on by a declining birthrate and rapidly aging population in recent years, labor shortages have worsened. This is as true in the railway sector as it is in other industries. Labor-saving solutions for inspections have become critical. NEC’s onboard track patrol support system facilitates the automatic detection and visualization of obstacles on or near the railway track captured in video images as the train travels along the rails. This increases efficiency and ensures safe, reliable railway service. This paper introduces NEC’s onboard track patrol system in detail, outlining the system configuration, providing application examples, and discussing future prospects.

1. Introduction

Many of Japan’s industries have been affected by a shrinking workforce resulting from a declining birthrate, an aging society, and a declining population. As a result, the railway industry has had greater difficulty acquiring the necessary maintenance staff and drivers, so labor-saving solutions and greater efficiency in operations have become all the more urgent. To solve these issues, the industry is accelerating efforts to use technology involving artificial intelligence (AI) and the Internet of Things (IoT).

Conventional track patrol refers to operations in which experienced maintenance staff travel in the front car of a train that is in service and visually monitor the environment along the railway track as well as the conditions of the vehicle and infrastructure, watching for signs of anything that might impede railway operation. This is a task usually assigned to one of the maintenance crew responsible for track maintenance, power, signals, and construction in accordance with a schedule specified by the railway company. We targeted this as a task where significant labor savings and efficiency improvements could be realized when developing our onboard track patrol support system*. Two cameras installed at the front of the train capture video images of the surrounding environment as the train moves along the railway track. The captured images are analyzed in real time to automatically assess conditions and determine whether or not there are any obstructions exceeding the required clearance. This system supports conventional track patrol, which is usually performed visually by onboard maintenance crew, to help make maintenance procedures even safer, more reliable, and more efficient. Video images of the environment along the railway track can be automatically obtained and analyzed over a wireless network simply by specifying a desired date and time for the track patrol. If an impediment is detected at a given location, the onboard track patrol support system automatically classifies the video image of that location and creates a report. This also makes it possible to significantly reduce the work required to create reports ― an additional time-consuming task usually performed manually by the maintenance staff after visual confirmation.

We will provide a technological overview of the image analysis engine of the onboard track patrol support system, followed by application examples, and finally, we will discuss future prospects.

  • *
    A part of this project has been supported by subsidies from Japan’s Ministry of Land, Infrastructure, Transport and Tourism to help the development of railway technology.

2. Technological Overview of the Image Analysis Engine

At the heart of the onboard track patrol support system is a function for structure gauge obstacle detection. The structure gauge defines the range of space, in reference to the position of the railway track, where it is prohibited to build any structure or place any object that may interfere with the operation of a train. Any foreign object present in this space, be it a problem with the railway’s infrastructure, a fallen object, an animal carcass, or an overgrowth of vegetation, must be eliminated to ensure the safety of train operation. Even when a foreign object is only approaching the railway track, it must be detected and appropriate measures taken to prevent any chance of interference with the train as it travels along the track.

This function acquires three-dimensional (3D) data from video images of the railway track and its environs captured by the cameras installed at the front of the train. This data is then analyzed to determine whether a foreign object that must be cleared is present in the vicinity of the tracks. The 3D data takes the form of multiple points, each with a three-dimensional coordinate, which are aggregated into point groups. If a point is detected within the boundaries of the structure gauge, it is determined to be an obstacle.
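The gauge check itself reduces to a point-in-region test over the reconstructed point groups. The following sketch illustrates the idea with a simplified rectangular gauge cross-section in track-relative coordinates; the dimensions and the `points_inside_gauge` helper are illustrative assumptions, as real structure gauges are more complex polygons defined by each railway’s standards.

```python
import numpy as np

# Assumed, simplified gauge cross-section in track-relative coordinates
# (x: lateral offset from the track centre [m], y: height above the
# rail head [m]). Real gauges are polygonal and curvature-dependent.
GAUGE_HALF_WIDTH = 1.9   # illustrative lateral clearance [m]
GAUGE_HEIGHT = 4.5       # illustrative vertical clearance [m]

def points_inside_gauge(points: np.ndarray) -> np.ndarray:
    """Boolean mask over 3D points (N x 3: x, y, z along the track)
    that fall inside the simplified rectangular gauge cross-section."""
    x, y = points[:, 0], points[:, 1]
    return (np.abs(x) <= GAUGE_HALF_WIDTH) & (0.0 <= y) & (y <= GAUGE_HEIGHT)

# Toy point group: one point intrudes into the gauge, one is clear.
cloud = np.array([
    [0.5, 1.0, 12.0],   # inside the gauge -> obstacle candidate
    [3.0, 1.0, 12.0],   # laterally clear of the gauge
])
mask = points_inside_gauge(cloud)
```

A production check would run this test against every reconstructed point group, positioned relative to the detected rails.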

2.1 3D reconstruction using multi-viewpoint stereo measurement

Two cameras are mounted at the very front of the train and oriented to provide comprehensive coverage of the track. The video images captured by these cameras are used for three-dimensional measurement. Because two cameras are used, stereo measurement can be applied for 3D reconstruction. Furthermore, the cameras continuously shoot stereo images as the train travels along the track, acquiring multi-viewpoint images of the same scene from many different positions. The two-dimensional (2D) images captured by the cameras can thus be reconstructed in 3D, dramatically improving the quality of the 3D estimates.
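As a simplified illustration of the stereo measurement step, the sketch below triangulates a single 3D point from two calibrated views using linear (DLT) triangulation, a standard building block of stereo and multi-viewpoint reconstruction. The camera intrinsics and the 0.5 m baseline are toy values, not the system’s actual calibration.

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; u1, u2: (x, y) pixel
    coordinates of the same scene point in each image."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the null vector of A (least squares via SVD).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy calibration: identical cameras with a 0.5 m lateral baseline.
K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])

# Project a known point into both views, then recover it.
X_true = np.array([1.0, 0.5, 10.0])
u1 = P1 @ np.append(X_true, 1); u1 = u1[:2] / u1[2]
u2 = P2 @ np.append(X_true, 1); u2 = u2[:2] / u2[2]
X_est = triangulate(P1, P2, u1, u2)
```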

A technology called structure-from-motion (SfM) photogrammetry performs 3D contour reconstruction from video images shot by moving cameras. SfM photogrammetry is capable of not only measuring the geometric contours of an object in the image but also estimating the movement of the cameras at the same time. Other ways to estimate camera movement are to use a global positioning system (GPS), a distance measuring instrument (DMI) that counts the number of wheel rotations to measure the traveling distance, or an inertial measurement unit (IMU) that detects the behavior of the train car. However, the camera motion information used in image analysis needs to be extremely precise, and these alternative sensors are susceptible to radio interference caused by geographic factors and can also be affected by the spinning of the train’s wheels. Greater precision can consequently be achieved by methods that estimate the motion of the camera based on image information alone.
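To illustrate the idea of estimating motion from point correspondences alone, the sketch below recovers the rigid motion between two sets of matched 3D landmarks with the Kabsch algorithm. This is a simplified stand-in: a real SfM or visual SLAM pipeline estimates motion from 2D image matches, whereas the landmark correspondences here are given directly.

```python
import numpy as np

def rigid_motion(p_prev: np.ndarray, p_curr: np.ndarray):
    """Estimate the rotation R and translation t mapping point set
    p_prev onto p_curr (Kabsch algorithm). In an SfM/visual-SLAM
    pipeline this step would run on landmarks matched between frames."""
    c_prev, c_curr = p_prev.mean(axis=0), p_curr.mean(axis=0)
    H = (p_prev - c_prev).T @ (p_curr - c_curr)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_curr - R @ c_prev
    return R, t

# Toy scene: the "camera" advanced 2 m along the track (z axis),
# so every landmark appears shifted by -(0, 0, 2) in camera terms;
# here we express it as the landmarks moving by (0, 0, 2).
rng = np.random.default_rng(0)
landmarks = rng.uniform(-5, 5, size=(20, 3))
moved = landmarks + np.array([0.0, 0.0, 2.0])
R, t = rigid_motion(landmarks, moved)
```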

Operating daily, the cameras shoot full high definition (HD) images for several hours a day from the multiple trains in which they are installed, and the captured images are aggregated in a data center. SfM photogrammetry requires an enormous amount of computation, yet the results must be presented within a specified timeframe. To optimize performance, this system therefore uses visual simultaneous localization and mapping (SLAM) in combination with SfM photogrammetry. SLAM is a generic term for technology that simultaneously estimates its own location and generates maps of the environment; when a camera is used as the input device, it is called visual SLAM. We are developing a form of visual SLAM, derived from SfM photogrammetry, that specializes in real-time processing for the navigation of robots and self-driving vehicles.

2.2 High speed detection of obstacles in the structure gauge

When an image is input, visual SLAM is applied to perform high speed measurement of camera motion data and coarse 3D data. By “coarse” we mean that the measurement produces only a sparse group of points capturing the outline of the scene. Because the system’s purpose is to determine whether the structure gauge is obstructed, this coarse data is first used to flag, without omission, any suspicious sections where the gauge may be obstructed. Multi-viewpoint reconstruction is then applied only to the flagged sections. By upgrading the coarse group of points to high density data, the system can verify in detail whether or not an obstacle is present in the structure gauge. As a result, the system achieves high speed detection without compromising measurement accuracy. High density 3D reconstruction results are shown in Fig. 1.

Fig. 1 Examples of 3D restoration results.
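The coarse-to-fine control flow described above can be sketched as follows. The section partitioning, the simplified rectangular gauge test, and the `densify` stand-in for dense multi-viewpoint reconstruction are all illustrative assumptions.

```python
import numpy as np

def inside_gauge(points, half_width=1.9, height=4.5):
    """Simplified rectangular gauge test in track-relative coordinates."""
    x, y = points[:, 0], points[:, 1]
    return (np.abs(x) <= half_width) & (0.0 <= y) & (y <= height)

def screen_then_verify(sections, densify):
    """Coarse-to-fine detection sketch. `sections` maps a section id to
    its coarse (sparse) point group; `densify` stands in for the
    expensive multi-viewpoint dense reconstruction, which is applied
    only where the coarse screen raised a suspicion."""
    alerts = []
    for sec_id, coarse in sections.items():
        if not inside_gauge(coarse).any():
            continue                    # cheap pass: section is clear
        dense = densify(sec_id)         # expensive pass, on demand only
        if inside_gauge(dense).any():
            alerts.append(sec_id)
    return alerts

# Toy data: section "A" is clear, section "B" has an intruding point.
sections = {
    "A": np.array([[3.0, 1.0, 0.0]]),
    "B": np.array([[0.4, 1.2, 0.0]]),
}
dense_clouds = {"B": np.array([[0.4, 1.2, 0.0], [0.5, 1.1, 0.2]])}
alerts = screen_then_verify(sections, lambda s: dense_clouds[s])
```

The speedup comes from the early `continue`: dense reconstruction never runs on sections the coarse screen already cleared.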

Next, rails are detected in a captured image (Fig. 2 (a)). Frames that correspond to the structure gauge are then placed along the linear shapes of the rails, and the system determines whether there is a group of high density points within the structure gauge. If so, the system issues an alert to indicate that an obstacle is in the structure gauge (Fig. 2 (b)).

Fig. 2 Example of structure gauge obstacle detection.

3. Case Study – Kyushu Railway Company

In April 2020, Kyushu Railway Company (JR Kyushu) began using our onboard track patrol support system. The system configuration is roughly divided into an onboard unit (OBU) and a ground unit (Figs. 3 & 4).

Fig. 3 System configuration.
Fig. 4 Onboard unit.

The OBU receives data showing the train’s location and captures track images which are then transmitted to the ground unit. Wireless LTE networks are used for communications, allowing captured images to be transmitted even while the train is traveling.

At the ground unit, display and analysis servers are installed in a data center. Images are matched with information on their corresponding locations and stored on the servers. The collected data is processed by the image analysis engine to determine whether or not there is an obstacle. In the offices where the maintenance staff work, the analysis results and collected images can be viewed on monitors and reports can be output (Fig. 5).

Fig. 5 Example of report.

4. Future Prospects

4.1 Development of difference detection technology and use of 3D point groups

Component technologies which will be incorporated in the onboard track patrol support system in the future include difference detection, anomaly detection, and virtual reality (VR)–based track inspection.

4.1.1 Difference detection

Difference detection first records images taken under typical conditions (reference images) and then compares them with images taken during patrol (test images) to detect sections where differences occur (Fig. 6). A straightforward alternative would be to collect images showing the conditions that require maintenance and have machine learning detect those conditions directly. In actual operations, however, the items that require maintenance are so varied and diverse that it is difficult to collect them in a comprehensive manner. By instead defining any difference from typical conditions as a candidate phenomenon to be maintained, this problem can be solved.

Fig. 6 Difference detection.
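As a minimal illustration of the comparison described above, the sketch below flags a section when enough pixels differ between a reference image and a test image. The thresholds are illustrative assumptions, and a real system must first register the two images, since the train never passes a location in exactly the same position; alignment is assumed already done here.

```python
import numpy as np

def diff_sections(reference: np.ndarray, test: np.ndarray,
                  threshold: int = 30, min_pixels: int = 50) -> bool:
    """Flag a changed section by comparing a patrol (test) image against
    an aligned reference image of the same location. The per-pixel
    intensity threshold and the minimum changed-pixel count are
    illustrative tuning parameters."""
    delta = np.abs(reference.astype(np.int16) - test.astype(np.int16))
    changed = delta > threshold
    return int(changed.sum()) >= min_pixels

# Toy grayscale images: a bright "fallen object" appears in the test image.
ref = np.zeros((64, 64), dtype=np.uint8)
test = ref.copy()
test[20:30, 20:30] = 200   # a 10 x 10 patch of changed pixels
```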

4.1.2 Anomaly detection

Just as we can intuitively perceive any deviation from the norm, the anomaly detection we are now developing will allow deviations to be detected without precisely memorizing typical conditions. For example, humans can immediately notice that something is not right as soon as they see objects blown onto or near the rails, deformed rails, and so on (Fig. 7).

Fig. 7 Anomaly detection.

To achieve this, images showing typical conditions (and a small number showing deviations) are learned by a deep neural network, and deviations are then detected as outliers from typical conditions. Like difference detection, this technology eliminates the need to collect rare deviations and enter them into the machine learning system, further contributing to the improved efficiency of track patrol support operations.
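The outlier principle can be illustrated even without a deep neural network: the sketch below learns the subspace of “typical” feature vectors with PCA and scores inputs by reconstruction error, a simplified stand-in for the deep-learning-based detector described above. All data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Typical" scenes: feature vectors lying in a low-dimensional subspace.
basis = rng.normal(size=(3, 32))
normal_data = rng.normal(size=(200, 3)) @ basis

# Learn the typical subspace from normal examples only; no labeled
# anomalies are needed, which is the point of outlier-style detection.
mean = normal_data.mean(axis=0)
_, _, Vt = np.linalg.svd(normal_data - mean, full_matrices=False)
components = Vt[:3]   # principal directions spanning "typical conditions"

def reconstruction_error(x):
    """Project onto the learned subspace and measure what is left over;
    a large residual marks the input as a deviation from the norm."""
    centred = x - mean
    return np.linalg.norm(centred - (centred @ components.T) @ components)

normal_sample = rng.normal(size=3) @ basis
anomaly = normal_sample + rng.normal(scale=5.0, size=32)  # off-subspace noise
```

In practice a threshold on the residual, calibrated on held-out normal data, would separate deviations from typical conditions.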

4.1.3 VR-based track inspection

Because of the coronavirus pandemic and the decline in the number of experienced maintenance staff, it has become critically important to reduce workloads by cutting back on activities such as on-site inspections. By constructing a VR space using the 3D measurements shown in Fig. 1, we are developing a VR-based track inspection system that makes it possible to perform the required tasks for an inspection in that same space.

In the VR space, we aim to provide a user interface that offers the same usability as actually placing a measurement device on a target to measure an item. We assume that measurement points will be placed on the sides of rails and at specific locations in the facilities reproduced in the VR space, and that the distance between measurement points can then be measured. For example, points encompassing a specific range of weed growth can be measured to determine how much growth has occurred.
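Once measurement points are picked in the reconstructed 3D space, the measurement itself is ordinary Euclidean geometry. The sketch below computes the length of a polyline through picked points; the `measure` helper and the coordinates are hypothetical.

```python
import numpy as np

def measure(points: list) -> float:
    """Total polyline length through measurement points placed in the
    VR space, mimicking laying a tape measure along a target."""
    pts = np.asarray(points, dtype=float)
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

# Hypothetical points picked on the 3D reconstruction: the near and far
# edge of a patch of vegetation beside the rail (coordinates in metres:
# lateral offset, height, position along the track).
near_edge = np.array([1.8, 0.0, 10.0])
far_edge = np.array([1.8, 0.0, 13.5])
extent = measure([near_edge, far_edge])   # extent of growth along the track
```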

4.2 Making this system a more readily available service

Going forward, we hope to popularize our track patrol support system, as we are confident this technology has the potential to significantly improve track inspection operations at many railway companies. The current system is offered as an on-premises product. However, by minimizing initial introduction costs and offering pricing plans based on usage frequency and selected functions, we plan to convert the system into a cloud-based subscription service that more railway companies will find easy to deploy.


  • *
    LTE is a registered trademark of European Telecommunications Standards Institute (ETSI).
  • *
    All other company names and product names that appear in this paper are trademarks or registered trademarks of their respective companies.

Authors’ Profiles

HAYASHI Masahiro
Assistant Manager
2nd City Infrastructure Solutions Division
ARIYAMA Yukitaka
Manager
2nd City Infrastructure Solutions Division
NAKAJIMA Noboru
Researcher
Biometrics Research Laboratories
Professional
Digital Solutions Division
NEC Solution Innovators
KAWASAKI Kyohei
Former Assistant Senior Researcher
Track Geometry & Maintenance Division
Railway Technical Research Institute (RTRI)
SHIMIZU Atsushi
Assistant Senior Researcher
Track Geometry & Maintenance Division
Railway Technical Research Institute (RTRI)
MIWA Masashi
Laboratory Head
Track Geometry & Maintenance Division
Railway Technical Research Institute (RTRI)
