Artificial Intelligence Failure - An Attributing Factor in Train Accidents in India
Indian Railways operates under demanding conditions owing to its enormous passenger volume, a complex route network, tightly scheduled timetables, and the coexistence of passenger and freight traffic. All of these must be managed so that nothing is left behind, while safety, capacity, comfort, and sustainability are ensured at the same time. To cater to all these situations, Artificial Intelligence (AI) has proved to be the best available solution.
A recent train accident in Odisha, India was caused by an erroneous signal. Since the signalling system was operated by a complex Artificial Intelligence system, the failure is attributed to a malfunction of the AI. Before examining what happened with the AI system in this particular case, let us look at the problems that can arise while using AI.
To control train movement while ensuring safety, efficiency, and punctuality, AI has a vital role to play, and Indian Railways has taken the right step in deploying it across one of the world's largest rail networks. Coloured light signals have now been replaced by automated signalling systems using Artificial Intelligence, which has improved safety and efficiency.
Using sensors and cameras, AI can detect obstacles on the track, whatever their cause. The system alerts the driver so that he is mentally prepared to activate the emergency brakes, and it monitors the speed and position of trains to avoid collisions and derailments.
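The alert-then-brake behaviour described above can be sketched as a simple decision rule: compare the distance to a detected obstacle against the train's braking distance. The function names, the deceleration value, and the 2x safety margin below are illustrative assumptions, not the actual railway system's parameters.

```python
# Illustrative sketch (not the real signalling system): decide whether an
# obstacle detected by sensors/cameras requires an alert or emergency braking.

def braking_distance_m(speed_kmh: float, deceleration_mps2: float = 1.2) -> float:
    """Distance needed to stop from speed_kmh at a constant deceleration."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * v / (2 * deceleration_mps2)

def obstacle_response(speed_kmh: float, obstacle_distance_m: float) -> str:
    """Return the action for a detected obstacle, with a 2x safety margin."""
    stop_dist = braking_distance_m(speed_kmh)
    if obstacle_distance_m <= stop_dist:
        return "EMERGENCY_BRAKE"   # cannot rely on the driver reacting in time
    elif obstacle_distance_m <= 2 * stop_dist:
        return "ALERT_DRIVER"      # warn so the driver can brake manually
    return "MONITOR"               # obstacle far enough away to keep watching

print(obstacle_response(100, 250))   # obstacle inside braking distance
print(obstacle_response(60, 500))    # obstacle well clear of the train
```

A real system would of course fuse multiple sensor inputs and account for gradient, adhesion, and reaction time; the sketch only shows the core distance comparison.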
In Artificial Intelligence, machines are expected to behave like human beings and take decisions independently by summing up all the data available for the purpose. In the recent past, AI has been put to work in a wide range of railway tasks: accommodating passengers, keeping an eye on complex route networks, keeping trains to their timetables, reducing delays, and tying all of this to the safety of the railway network and its people.
Amid the extensive use of this technology, the consequences of its failure have been largely ignored. This should not happen. A complex algorithm can become entangled within the system and produce abnormal signals that may lead to catastrophic consequences. Failure of the system poses a great threat to life and well-being, and such failures have already produced visible results.
To explain the failures or malfunctioning of AI, let us review the major events that have occurred so far. The most widely reported Artificial Intelligence (AI) failures are listed below:
AI failed to recognize images:
This is one of the most commonly reported failures of AI. It is a widely held belief that computer vision is a trustworthy technology that is unlikely to fail. That belief was put to the test when researchers at Berkeley, the University of Chicago, and the University of Washington gathered 7,500 unedited natural images and asked computers to classify them: the images perplexed even the most powerful computer vision algorithms, and the most tried-and-true models failed repeatedly.
AI despised humans:
A chatbot was fed different slang words and characters and asked to generate responses. One generated result stirred up the whole media: "Hitler was correct to hate the Jews". In fact, the goal had been to build a slang-savvy chatbot that would raise machine-human conversation quality to a new level. What was revealed instead was, in effect, a robot parrot with an internet connection. This is another well-known AI failure.
AI to fight cancer could kill patients:
IBM developed an AI system, Watson, to support the battle against cancer, but the results were unsatisfactory and the failure cost $62 million. At Jupiter Hospital in Florida, Watson advised physicians to give a cancer patient a medication despite the warning that it might aggravate bleeding during treatment. Multiple cases of dangerous and erroneous therapy recommendations were reported by medical experts and customers.
AI despised women:
Amazon set out to automate its selection process for hiring across thousands of vacancies. The system's output was biased in a sexist way: predominantly white men were selected. The training data that was used was imbalanced, resulting in the wrong selection of candidates, and the whole process failed.
AI for secure system access by face can be tricked with a mask:
It was assumed that no one could use a face mask to defeat Face ID identification. According to Apple, Face ID creates a 3-dimensional model of your face using the iPhone X's powerful front-facing camera and machine learning. The machine-learning/AI component allows the system to adjust to aesthetic changes (such as putting on make-up, donning a pair of glasses, or wrapping a scarf around your neck) while maintaining security.
It was breached by Bkav, a security company located in Vietnam. They discovered that by attaching 2D "eyes" to a 3D mask, one could unlock a Face ID-equipped iPhone. A mask was created from stone powder at a cost of about $200, and using it defeated the iPhone's Face ID feature.
AI believes that members of Congress resemble criminals:
A face-recognition blunder was committed by Amazon. Its AI system, named "Rekognition", was meant to detect offenders using their facial images. When tested on batches of photos of members of Congress, the results were prejudiced and racially skewed: about 40% of the erroneous matches were of people of colour. It was unclear why the system singled out non-white faces and flagged them as criminals. A great debate started over the trustworthiness of the system.
A lawsuit was filed as a result of an AI-related loss:
A fintech firm sold an AI trading system to a Hong Kong real estate tycoon for US$23 million. When the tycoon put the system to use, erroneous signals from the robot cost him up to US$20 million per day. A lawsuit was filed in court, stating that the system had been oversold beyond its actual capacity.
AI lost its employment to humans:
In 2015, a hotel in Japan declared that all of its staff would be robots. All of the Henn-na Hotel's employees were robots, including the front desk, cleaners, porters, and in-room helpers. However, consumer complaints started to accumulate right from the start. The systems regularly broke down, and hotel management failed to satisfy the guests, conceding that the robots were unable to offer adequate responses to visitors' questions. The in-room helpers often frightened guests at night by misinterpreting snoring as a wake command. The hotel management finally decided to replace the unreliable, costly, and irritating robots with human staff.
AI-driven cart malfunctioned on the tarmac:
An AI-driven food cart malfunctioned on the tarmac, circling out of control very close to an airplane parked at the gate. It was about to hit the plane when a staff member noticed it and rammed another vehicle into the cart to stop it and avoid any damage to the airplane. This is cited as a well-known failure of an AI system.
TRAIN ACCIDENT IN ODISHA, INDIA - a case of AI malfunction
The Coromandel Express travels between Chennai and Kolkata, covering a distance of 1,662 km in 27 hours, while the Bengaluru-Howrah Superfast Express runs between Bengaluru and Howrah (West Bengal), covering 1,962 km in 33 hours. On the fatal day of 2nd June 2023, both trains were running on the main line, while a goods train was standing on a branch line to give way to both of them. Due to an AI failure, a wrong signal was transmitted to the system: the Coromandel Express changed track, went onto the branch line, and hit the goods train standing there. Carriages of the Coromandel Express overturned, and some landed on the main line, where the Bengaluru-Howrah Superfast Express was approaching from the opposite direction. The superfast train hit the carriages on the track. In this way three trains collided, a uniquely tragic accident that has killed 288 people so far.
Indian Railways has developed and implemented an Artificial Intelligence-based system named KAVACH (Train Collision Avoidance System). The system prevents accidents caused by human error, such as signal passing at danger and overspeeding. This complex system collects all the data about trains passing through a region, such as nearby railway stations, railway crossings, and weather conditions. These and other such data are processed to generate signals for the train.
KAVACH controls the speed of the train and generates signals for track changes. It can also apply the emergency brakes and sound the whistle when passing railway crossings. Since KAVACH is a newly introduced system, there is a chance that it generated the wrong signal and the track was changed for the Coromandel Express. A single wrong signal caused the collision of three trains. Now is the time to examine the AI system in depth and remove the shortcomings in it.
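The protections described above (signal passing at danger and overspeed) amount to a supervision loop that compares the train's state against its braking capability each control cycle. The sketch below is a hypothetical illustration of that logic; the class, thresholds, and deceleration figure are assumptions for the example, not the real KAVACH design.

```python
# Hypothetical sketch of the kind of supervision a collision-avoidance system
# performs each cycle: SPAD (signal passed at danger) and overspeed protection.
# All names and numeric thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TrainState:
    speed_kmh: float          # current speed
    dist_to_signal_m: float   # distance to the next signal
    signal_at_danger: bool    # is the next signal at danger (red)?
    speed_limit_kmh: float    # permitted speed on this section

def supervise(state: TrainState, deceleration_mps2: float = 1.0) -> str:
    """Return the supervision action for one control cycle."""
    v = state.speed_kmh / 3.6
    stop_dist = v * v / (2 * deceleration_mps2)  # distance needed to stop

    # SPAD protection: brake if the train can no longer stop before a red signal.
    if state.signal_at_danger and state.dist_to_signal_m <= stop_dist:
        return "EMERGENCY_BRAKE"
    # Overspeed protection: warn first, brake if the limit is badly exceeded.
    if state.speed_kmh > state.speed_limit_kmh + 10:
        return "SERVICE_BRAKE"
    if state.speed_kmh > state.speed_limit_kmh:
        return "OVERSPEED_WARNING"
    return "NORMAL"

# Example: 110 km/h approaching a red signal 400 m ahead, limit 130 km/h.
print(supervise(TrainState(110, 400, True, 130)))
```

The point of the sketch is that a single wrong input (for instance, a signal state reported as clear when it is not) flows straight through such logic into the wrong action, which is exactly why erroneous signal generation is so dangerous.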