US probing Autopilot problems on 765,000 Tesla vehicles

The U.S. government has opened a formal investigation into Tesla’s Autopilot partially automated driving system after a series of collisions with parked emergency vehicles.

The investigation covers 765,000 vehicles, almost everything that Tesla has sold in the U.S. since the start of the 2014 model year. In the crashes identified by the National Highway Traffic Safety Administration as part of the probe, 17 people were injured and one was killed.

NHTSA says it has identified 11 crashes since 2018 in which Teslas on Autopilot or Traffic Aware Cruise Control have hit vehicles at scenes where first responders have used flashing lights, flares, an illuminated arrow board or cones warning of hazards. The agency announced the action Monday in a posting on its website.

The probe is another sign that NHTSA under President Joe Biden is taking a tougher stance on automated vehicle safety than under previous administrations. Previously the agency was reluctant to regulate the new technology for fear of hampering adoption of the potentially life-saving systems.

The investigation covers Tesla’s entire current model lineup, the Models Y, X, S and 3 from the 2014 through 2021 model years.

The National Transportation Safety Board, which also has investigated some of the Tesla crashes dating to 2016, has recommended that NHTSA and Tesla limit Autopilot’s use to areas where it can safely operate. The NTSB also recommended that NHTSA require Tesla to have a better system to make sure drivers are paying attention. NHTSA has not taken action on any of the recommendations. The NTSB has no enforcement powers and can only make recommendations to other federal agencies.

“Today’s action by NHTSA is a positive step forward for safety,” NTSB Chair Jennifer L. Homendy said in a statement Monday. “As we navigate the emerging world of advanced driving assistance systems, it’s important that NHTSA has insight into what these vehicles can, and cannot, do.”

Last year the NTSB blamed Tesla, drivers and lax regulation by NHTSA for two collisions in which Teslas crashed beneath crossing tractor-trailers. The NTSB took the unusual step of accusing NHTSA of contributing to the crash for failing to make sure automakers put safeguards in place to limit use of electronic driving systems.

The agency made the determinations after investigating a 2019 crash in Delray Beach, Florida, in which the 50-year-old driver of a Tesla Model 3 was killed. The car was driving on Autopilot when neither the driver nor the Autopilot system braked or tried to avoid a tractor-trailer crossing in its path.

“We are glad to see NHTSA finally acknowledge our long standing call to investigate Tesla for putting technology on the road that will be foreseeably misused in a way that is leading to crashes, injuries, and deaths,” said Jason Levine, executive director of the nonprofit Center for Auto Safety, an advocacy group. “If anything, this probe needs to go far beyond crashes involving first responder vehicles because the danger is to all drivers, passengers, and pedestrians when Autopilot is engaged.”

Autopilot has frequently been misused by Tesla drivers, who have been caught driving drunk or even riding in the back seat while a car rolled down a California highway.

A message was left seeking comment from Tesla, which has disbanded its media relations office. Shares of Tesla Inc., based in Palo Alto, California, fell 4.3% Monday.

NHTSA has sent investigative teams to 31 crashes involving partially automated driver assist systems since June of 2016. Such systems can keep a vehicle centered in its lane and a safe distance from vehicles in front of it. Of those crashes, 25 involved Tesla Autopilot in which 10 deaths were reported, according to data released by the agency.

Tesla and other manufacturers warn that drivers using the systems must be ready to intervene at all times. In addition to crossing semis, Teslas using Autopilot have crashed into stopped emergency vehicles and a roadway barrier.

The probe by NHTSA is long overdue, said Raj Rajkumar, an electrical and computer engineering professor at Carnegie Mellon University who studies automated vehicles.

Tesla’s failure to effectively monitor drivers to make sure they’re paying attention should be the top priority in the probe, Rajkumar said. Teslas detect pressure on the steering wheel to make sure drivers are engaged, but drivers often fool the system.

“It’s very easy to bypass the steering pressure thing,” Rajkumar said. “It’s been going on since 2014. We have been discussing this for a long time now.”

The crashes into emergency vehicles cited by NHTSA began on Jan. 22, 2018, in Culver City, California, near Los Angeles, when a Tesla using Autopilot struck a parked firetruck that was partially in the travel lanes with its lights flashing. Crews were handling another crash at the time.

Since then, the agency said there were crashes in Laguna Beach, California; Norwalk, Connecticut; Cloverdale, Indiana; West Bridgewater, Massachusetts; Cochise County, Arizona; Charlotte, North Carolina; Montgomery County, Texas; Lansing, Michigan; and Miami, Florida.

“The investigation will assess the technologies and methods used to monitor, assist and enforce the driver’s engagement with the dynamic driving task during Autopilot operation,” NHTSA said in its investigation documents.

In addition, the probe will cover object and event detection by the system, as well as where it is allowed to operate. NHTSA says it will examine “contributing circumstances” to the crashes, as well as similar crashes.

An investigation could lead to a recall or other enforcement action by NHTSA.

“NHTSA reminds the public that no commercially available motor vehicles today are capable of driving themselves,” the agency said in a statement. “Every available vehicle requires a human driver to be in control at all times, and all state laws hold human drivers responsible for operation of their vehicles.”

The agency said it has “robust enforcement tools” to protect the public and investigate potential safety issues, and it will act when it finds evidence “of noncompliance or an unreasonable risk to safety.”

In June, NHTSA ordered all automakers to report any crashes involving fully autonomous vehicles or partially automated driver assist systems.

Tesla uses a camera-based system, a lot of computing power, and sometimes radar to spot obstacles, determine what they are, and then decide what the vehicles should do. But Carnegie Mellon’s Rajkumar said the company’s radar was plagued by “false positive” signals and would stop cars after determining overpasses were obstacles.

Now Tesla has eliminated radar in favor of cameras and thousands of images that the computer neural network uses to determine if there are objects in the way. The system, he said, does a very good job on most objects that would be seen in the real world. But it has had trouble with parked emergency vehicles and perpendicular trucks in its path.

“It can only find patterns that it has been quote-unquote trained on,” Rajkumar said. “Clearly the inputs that the neural network was trained on just do not contain enough images. They’re only as good as the inputs and training. Almost by definition, the training will never be good enough.”

Tesla also is allowing selected owners to test what it calls a “full self-driving” system. Rajkumar said that should be investigated as well.


DETROIT (AP)

Impact of Technology in Politics: The Internet and Democracy

The exponential growth of the Internet has raised pressing questions about the potential effects of new information and communication technologies (ICTs) on democratic processes. Given the complexities of democratic governance and the historical weight of the digital era, the scope of the debate over technology's impact on politics is remarkable.

What is the Impact of Technology on Politics?

Consider what the future might look like in light of today's technological advances and their relationship to democratic politics. Over the past couple of years, the global political landscape has fostered the illusion that technology's role in politics can only move upward. Yet if the Capitol riot proved one thing, it is that this idea is deeply misleading. The relationship between technological development and democratic politics is not always constructive or beneficial.

The impact of technology in politics includes new forms of political manipulation that authoritarian figures use to steer public opinion. The manipulation of the information the public absorbs, the relentless monitoring of opponents in pursuit of geopolitical aims, and the never-ending censorship of information have altered popular culture as it fuses with political culture.

At a time when technology could be used to drive constructive political change, politicians are instead abusing the endless offerings of innovation to secure their own political gains. Yet the main question remains: what is the impact of technology on politics, and does it generate any significant political change?

In that respect, innovative technology and emerging means of communication do bear some responsibility for how politicians abuse them. Because technology has become accessible worldwide, it is available to anyone who wants to use it for ulterior motives.

New strategies are emerging to cope with this rapid technological adoption, as various parties use and abuse the endless possibilities of the digital age. The rise of that age was inevitable, and so was its use for political motives and endeavors.

Just as some people believe the impact of technology in politics is positive, others disagree. Public opinion is divided, with many believing that countries and authoritarian figures with access to advanced technologies are armed with the tools needed to influence citizens negatively. Worldwide internet access has made it easier for political parties to shape public perception with falsified rumors and information, largely through social media platforms such as Facebook, Instagram, and TikTok.

Importance of Technology

The continuous development of technology has had a remarkable impact on politicians' fortunes, especially through its role in economic growth. Digital tools can ease economic growth by enabling innovative methods of production. While political candidates can use technology in many ways to shape public opinion, social media platforms in particular can carry much of the weight of persuasion, which in turn can raise a candidate's standing.

Whether the world is willing to acknowledge it or not, technology, and the internet in particular, has become the most powerful tool in political races. By adopting technological means, politicians can fund their campaigns, enlist political experts, and promote themselves without paying for advertising, since all of this can happen via social media platforms.

One of technology's most influential effects on politics is financial: it helps federal candidates raise and allocate funds during an election. The internet is integral to raising funds through advertising and online shopping, and it paves the way for candidates to find suitable donors to back the various parts of their operations.


How Technology is Steering us Towards Digital Totalitarianism

Social media, the internet, and other digital tools, which were once hailed as great forces for human empowerment, connectivity, and liberation, have quickly come to be seen as a serious threat to democratic stability and human freedom. Social media platforms are demonstrating the potential to exacerbate risks such as authoritarian privacy violations, partisan echo chambers, and the spread of harmful disinformation because they are based on a seriously flawed business model. A number of other developments in digital technology, most notably the advent of artificial intelligence (AI), are also benefiting authoritarian forces. These changes have the potential to lead to digital totalitarianism that is much easier to slide into than to climb out of.

Social Media and Big Data

In the increasingly data-driven world, technology is everywhere. Numerous shopping apps use your phone’s GPS to determine your location, giving merchants the opportunity to send you advertisements as soon as you pass by their storefront. Retailers can charge you exactly the most you’re willing to spend on a given product, thanks to personalized pricing. Even at home, your personal information is not secure: Digital assistants like Amazon Alexa save your search history, so they are aware of all of your preferences, including music, travel habits, and specific shopping histories.

Employers are tracking and monitoring their employees using the latest technology. Biometric timecards that scan an employee's fingerprint, hand shape, retina, or iris are being used by an increasing number of businesses. Sensors that monitor door opening and closing, vehicle engine activity, and seatbelt clicks are installed in UPS trucks. Amazon has filed patents for an electronic wristband that tracks hand motions, ensuring, for example, that a warehouse worker is constantly moving boxes.

With a bit of sci-fi imagination and a quick glance to the other side of the planet (cough – China), one can easily see how these technologies together form a slippery slope towards digital totalitarianism.

According to Human Rights Watch's Maya Wang, the Chinese government has used information from video surveillance, face and license plate identification, mobile device locations, and official records to identify targets for imprisonment in Xinjiang. The report is the most recent in a series that has highlighted the extensive use of sophisticated monitoring, more conventional security measures, and political indoctrination camps in the region, which has acted as a proving ground for methods and innovations later used elsewhere.

Social Credit Systems and Digital Totalitarianism

China's extreme tech programs are notorious for bordering on digital totalitarianism. The country's "social credit system," slated to track citizens' behavior by 2020, keeps records of everything from speeding tickets to social media posts critical of the government. Everyone is then assigned a special "sincerity score"; a high score is necessary for anyone hoping to obtain the best housing, get the fastest Internet speeds, enroll their children in the most prestigious institutions, and land the most lucrative jobs.

The system was originally designed to standardize credit ratings and carry out financial and social assessments of corporations, government institutions, individuals, and non-governmental groups. It can, however, quickly evolve into a precisely effective method of digital totalitarianism once it becomes as restrictive as it is convenient.

Such a system doesn't even need to be directly enforced to be an effective tool of social control: friends and family members would police one another's behavior for fear of the repercussions spilling over onto them, shaming and shunning fellow citizens who speak out against government entities lest they draw the algorithm's ire.

Control over Information Highways

The internet runs on vast, interconnected infrastructure networks that are managed by tech and telecom companies under strict government supervision.

This infrastructure underpins the highway on which all our information travels. Increasingly, it goes beyond just Facebook messages and emails. Payment gateways, access to news and information, education, and a rising number of jobs and careers depend completely on the maintenance of communication infrastructure.

The digital repression taking place in Myanmar is one example of how authoritarian states can leverage their control over such communication highways to stifle resistance. Some may see this control as a useful tool for maintaining order and ensuring security, while others see it as an unacceptable, oppressive method of digital totalitarianism that will not be used against the people, until it is.

In addition to imposing regular internet outages, the junta, the military force that seized control of the country, blocked access to social media sites. On February 4, Facebook, which has more than 22 million users in Myanmar, roughly 40% of the population, was blocked. Before the ban, anti-coup activists had frequently used Facebook to plan large-scale acts of civil disobedience, such as doctors refusing to work in military hospitals and protesters staging fake car accidents and sit-ins on trains to block traffic.

After Facebook was banned in the country, protesters moved to Twitter to organize, and it was blocked the next day as well. Later, on February 9, the junta proposed a cybersecurity law that, according to Human Rights Watch, would “give it sweeping powers to access user data, block websites, order internet shutdowns, and imprison critics and officials at non-complying companies.”

Predictive AI as a Tool for Digital Totalitarianism

In the U.S., a “predictive policing” initiative run by the New Orleans Police Department uses Big Data to create a hot list of probable criminal offenders. Quiet Skies, a comprehensive TSA-run technology initiative, analyzes and flags travelers based on “suspect” behavioral patterns. A traveler might land on the Quiet Skies list simply for being the last person to board their aircraft, changing clothes in the restroom, or glancing at their reflection in a terminal window.

Using such technology, artificial intelligence can now predict where crime will occur in a city, and at what rate, up to one week in advance with as much as 90% accuracy. Similar systems have been shown to reinforce racial bias in policing, and the same may be true here, especially since the data can be used to single out the individuals deemed most likely to commit a crime. The researchers who developed this AI assert, however, that it can also be used to uncover those very biases.

This would undoubtedly sound like good news to the head of a city police department: scarce resources and manpower could be allocated far more effectively if police knew in advance where their forces would be needed. It is, however, deeply concerning in the hands of malicious actors at the beck and call of a state hell-bent on using digital totalitarianism to achieve its ends by any means necessary.

In all the aforementioned cases, it is not the technology itself that is destructive or evil; the debate arises when we ask whether any person or entity, public or private, can be trusted with such power. If so, who, and what mechanisms are in place to mitigate the damage should they go rogue?


Distorting Reality of Sexual Abuse in the Metaverse

As a virtual world, the Metaverse was bound to witness such inappropriate occurrences. To stress the point once more: the issue is not really the Metaverse itself, but the people using it and the companies developing it without adequately protecting their users. Sexual abuse in the Metaverse cannot be attributed to the company that built the platform as much as to the people using it, yet the developer of the virtual world shares the blame for failing to create a safe ecosystem that shields women from the improper and vulgar behavior they have been exposed to in the virtual space. It is no secret that technology has facilitated sexual violence; digital technology is now considered one of the leading facilitators not only of virtual sexual harassment and abuse but also of face-to-face, sexuality-based harm.

Technology has given people nearly unlimited freedom to act as they please, and digital technologies are the leading facilitators of such conduct. Since its emergence, the tech industry and its unlimited offerings have seen almost no oversight from the appropriate parties. This lack of privacy laws, self-regulation, and transparency has led to disturbing, ethically intolerable incidents within the industry. From there, we can establish that while the problem occurs within the industry, the issue lies less with the industry itself than with how people use and manipulate the offerings of technological innovation.

Technology-Facilitated Sexual Harassment

Digital technologies have facilitated a wide range of sexual harassment behaviors, such as online sexual harassment, gender- and sexuality-based harassment, cyberstalking, exploitation through shared photos, and more, and this is before even touching on sexual abuse in the Metaverse. For now, this is merely a general picture of the improper conduct arising from the industry itself.

Such harassment is facilitated mainly through social media platforms such as Instagram and TikTok, messaging platforms such as WhatsApp and Facebook Messenger, and dating applications such as Tinder and Bumble. Against that backdrop, sexual abuse in the Metaverse has become a growing problem that is heavily affecting the internet and raising fundamental technological and social challenges.

We Need to Talk About Sexual Harassment in the Metaverse

It seems that Meta's virtual reality platform Horizon Worlds has become a hub for sexual harassment, exposing women to various forms of sexual abuse in the Metaverse. Women are reporting cases of sexual abuse and even assault in the parallel universe, and numerous users have expressed discontent with the company's lack of attentiveness in safeguarding their experience in Horizon Worlds.

In 2021, numerous reports of sexual abuse in the Metaverse emerged, adding another layer of discomfort for women on the internet. “Not only was I groped last night, but there were people there who supported this behavior, which made me feel isolated in the Plaza,” one woman told a news outlet.

Women's presence on the internet has long been exposed to such behavior and encounters, and virtual reality is simply adding another layer of unpleasantness for female users. While companies keep their focus on the design model of the virtual universe, one thing is being overlooked: the psychological effect of being exposed to such behavior.

Online watchdogs are reporting more and more cases of Metaverse sexual abuse. The numbers are rising sharply, with some users reporting being virtually raped on the platform within an hour of entering the universe while another avatar watched.

The problem here can be divided into two parts: the behavior of the users and the design model of Meta's Horizon Worlds. While it is nearly impossible to control how users behave and conduct themselves in the virtual world, Meta, for its part, failed to deliver a secure and protected space for its female users before releasing the VR platform to the public.

When a woman is assaulted in the Metaverse, the experience leaves a deeply rooted psychological effect. When a user directs unsolicited sexual conduct at a female user in the virtual world, the victim's brain cannot fully differentiate between what is real and what is virtual, because virtual reality connects the subconscious mind to the physical world, creating a vivid association between what happens in the virtual world and the real one.

