Hate speech in Myanmar continues to thrive on Facebook

Years after coming under scrutiny for contributing to ethnic and religious violence in Myanmar, Facebook still has problems detecting and moderating hate speech and misinformation on its platform in the Southeast Asian nation, internal documents viewed by The Associated Press show.

Three years ago, the company commissioned a report that found Facebook was used to “foment division and incite offline violence” in the country. It pledged to do better and developed several tools and policies to deal with hate speech.

But the breaches have persisted — and even been exploited by hostile actors — since the Feb. 1 military takeover this year that resulted in gruesome human rights abuses across the country.

Scrolling through Facebook today, it’s not hard to find posts threatening murder and rape in Myanmar.

One 2 1/2-minute video posted on Oct. 24, showing a supporter of the military calling for violence against opposition groups, has garnered over 56,000 views.

“So starting from now, we are the god of death for all (of them),” the man says in Burmese while looking into the camera. “Come tomorrow and let’s see if you are real men or gays.”

One account posts the home address of a military defector and a photo of his wife. Another post from Oct. 29 includes a photo of soldiers leading bound and blindfolded men down a dirt path. The Burmese caption reads, “Don’t catch them alive.”

Despite the ongoing issues, Facebook saw its operations in Myanmar as both a model to export around the world and an evolving and caustic case. Documents reviewed by AP show Myanmar became a testing ground for new content moderation technology, with the social media giant trialing ways to automate the detection of hate speech and misinformation with varying levels of success.

Facebook’s internal discussions on Myanmar were revealed in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions received by Congress were obtained by a consortium of news organizations, including The Associated Press.

Facebook has had a shorter but more volatile history in Myanmar than in most countries. After decades of censorship under military rule, Myanmar was connected to the internet in 2000. Shortly afterward, Facebook paired with telecom providers in the country, allowing customers to use the platform without needing to pay for the data, which was still expensive at the time. Use of the platform exploded. For many in Myanmar, Facebook became the internet itself.

Htaike Htaike Aung, a Myanmar internet policy advocate, said it also became “a hotbed for extremism” around 2013, coinciding with religious riots across Myanmar between Buddhists and Muslims. It’s unclear how much content moderation, if any, was being done at the time, whether by people or by automated systems.

Htaike Htaike Aung said she met with Facebook that year and laid out issues, including how local organizations were seeing exponential amounts of hate speech on the platform and how its preventive mechanisms, such as reporting posts, didn’t work in the Myanmar context.

One example she cited was a photo of a pile of bamboo sticks that was posted with a caption reading, “Let us be prepared because there’s going to be a riot that is going to happen within the Muslim community.”

Htaike Htaike Aung said the photo was reported to Facebook, but the company didn’t take it down because it didn’t violate any of the company’s community standards.

“Which is ridiculous because it was actually calling for violence. But Facebook didn’t see it that way,” she said.

Years later, the lack of moderation caught the attention of the international community. In March 2018, United Nations human rights experts investigating attacks against Myanmar’s Muslim Rohingya minority said Facebook had played a role in spreading hate speech.

When asked about Myanmar a month later during a U.S. Senate hearing, CEO Mark Zuckerberg replied that Facebook planned to hire “dozens” of Burmese speakers to moderate content, would work with civil society groups to identify hate figures and develop new technologies to combat hate speech.

“Hate speech is very language specific. It’s hard to do it without people who speak the local language and we need to ramp up our effort there dramatically,” Zuckerberg said.

Internal Facebook documents show that while the company did step up efforts to combat hate speech in the country, the tools and strategies to do so never came to full fruition, and individuals within the company repeatedly sounded the alarm. In one document from May 2020, an employee said a hate speech text classifier that was available wasn’t being used or maintained. Another document from a month later said there were “significant gaps” in misinformation detection in Myanmar.

“Facebook took symbolic actions I think were designed to mollify policymakers that something was being done and didn’t need to look much deeper,” said Ronan Lee, a visiting scholar at Queen Mary University of London’s International State Crime Initiative.

In an emailed statement to the AP, Rafael Frankel, Facebook’s director of policy for APAC Emerging Countries, said the platform “has built a dedicated team of over 100 Burmese speakers.” He declined to state exactly how many were employed. Online marketing company NapoleonCat estimates there are about 28.7 million Facebook users in Myanmar.

During her testimony to the European Union Parliament on Nov. 8, Haugen, the whistleblower, criticized Facebook for a lack of investment in third-party fact-checking, and relying instead on automatic systems to detect harmful content.

“If you focus on these automatic systems, they will not work for the most ethnically diverse places in the world, with linguistically diverse places in the world, which are often the most fragile,” she said while referring to Myanmar.

After Zuckerberg’s 2018 congressional testimony, Facebook developed digital tools to combat hate speech and misinformation and also created a new internal framework to manage crises like Myanmar around the world.

Facebook crafted a list of “at-risk countries” with ranked tiers for a “critical countries team” to focus its energy on, and also rated languages needing more content moderation. Myanmar was listed as a “Tier 1” at-risk country, with Burmese deemed a “priority language” alongside Ethiopian languages, Bengali, Arabic and Urdu.

Facebook engineers taught Burmese slang words for “Muslims” and “Rohingya” to its automated systems. They also trained systems to detect “coordinated inauthentic behavior” such as a single person posting from multiple accounts, or coordination between different accounts to post the same content.
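The documents described by the AP do not explain how these detection systems work internally. Purely as a hedged illustration of the general idea behind the second kind of coordination — many accounts posting the same content in a short span — the minimal sketch below groups posts by normalized text and flags clusters of distinct accounts publishing identical content within a time window. Every field name and threshold here is a hypothetical assumption, not Facebook’s implementation.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical illustration only: flag clusters where many distinct accounts
# post near-identical text within a short time window. This is NOT Facebook's
# actual system; the post fields and thresholds are assumptions.

def normalize(text):
    # Crude normalization so trivially altered copies still match.
    return " ".join(text.lower().split())

def find_coordinated_clusters(posts, min_accounts=5, window=timedelta(hours=1)):
    """posts: iterable of dicts with 'account_id', 'text' and 'timestamp' keys."""
    by_content = defaultdict(list)
    for post in posts:
        by_content[normalize(post["text"])].append(post)

    clusters = []
    for text, group in by_content.items():
        group.sort(key=lambda p: p["timestamp"])
        # Slide over the group and count how many distinct accounts posted
        # the same content inside the window starting at each post.
        for i, first in enumerate(group):
            in_window = [p for p in group[i:]
                         if p["timestamp"] - first["timestamp"] <= window]
            accounts = {p["account_id"] for p in in_window}
            if len(accounts) >= min_accounts:
                clusters.append({"text": text, "accounts": sorted(accounts)})
                break
    return clusters
```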

The company also tried “repeat offender demotion,” which lessened the impact of posts from users who frequently violated guidelines. In a test in two of the world’s most volatile countries, demotion worked well in Ethiopia, but poorly in Myanmar — a difference that flummoxed engineers, according to a 2020 report included in the documents.

“We aren’t sure why … but this information provides a starting point for further analysis and user research,” the report said. Facebook declined to comment on the record about whether the problem had been fixed in the year since its detection, or about the success of the two tools in Myanmar.

The company also deployed a new tool to reduce the virality of content called “reshare depth promotion” that boosts content shared by direct contacts, according to an internal 2020 report. This method is “content-agnostic” and cut viral inflammatory prevalence by 25% and photo misinformation by 48.5%, it said.
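The report’s mechanics are not spelled out in the documents. As a rough, content-agnostic sketch of the underlying idea — weighting a post by how many reshare hops separate it from its original author, so that content shared by direct contacts ranks highest — one might write something like the following. The function names, decay factor and depth cap are assumptions, not Facebook’s actual mechanism.

```python
# Hypothetical sketch of content-agnostic reshare-depth weighting: the further
# a post travels from its original author through reshares, the more its
# ranking score is discounted, regardless of what the post says.

def reshare_depth(post_id, parent_of):
    """Count how many reshare hops separate `post_id` from the original post."""
    depth = 0
    while post_id in parent_of:       # parent_of maps a reshare to its source post
        post_id = parent_of[post_id]
        depth += 1
    return depth

def depth_weighted_score(base_score, depth, decay=0.75, max_depth=10):
    """Discount the ranking score geometrically with reshare depth."""
    return base_score * (decay ** min(depth, max_depth))

# Example: a post reshared through three hops keeps about 42% of its base score.
parents = {"reshare3": "reshare2", "reshare2": "reshare1", "reshare1": "original"}
print(depth_weighted_score(1.0, reshare_depth("reshare3", parents)))  # 0.421875
```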

Slur detection and demotion were judged effective enough that staffers shared the experience in Myanmar as part of a “playbook” for acting in other at-risk countries such as Ethiopia, Syria, Yemen, Pakistan, India, Russia, the Philippines and Egypt.

While these new methods forged in Myanmar’s civil crises were deployed around the world, documents show that by June 2020 Facebook knew that flaws persisted in its Myanmar safety work.

“We found significant gaps in our coverage (especially in Myanmar and Ethiopia), showcasing that our current signals may be inadequate,” said an internal audit of the company’s “integrity coverage.” Myanmar was color-coded red with less than 55% coverage: worse than Syria but better than Ethiopia.

Haugen criticized the company’s internal policy of acting “only once a crisis has begun.”

Facebook “slows the platform down instead of watching as the temperature gets hotter, and making the platform safer as that happens,” she said during testimony to Britain’s Parliament on Oct. 25.

Frankel, the Facebook spokesperson, said the company has been proactive.

“Facebook’s approach in Myanmar today is fundamentally different from what it was in 2017, and allegations that we have not invested in safety and security in the country are wrong,” Frankel said.

Yet a September 2021 report by the Myanmar Social Media Insights Project found that posts on Facebook include coordinated targeting of activists, ethnic minorities and journalists, a tactic that has roots in the military’s history. The report also said the military is laundering its propaganda through public pages that claim to be media outlets.

Opposition and pro-military groups have used the encrypted messaging app Telegram to organize two types of propaganda campaigns on Facebook and Twitter, according to an October report shared with the AP by Myanmar Witness, a U.K.-based organization that archives social media posts related to the conflict.

Myanmar is a “highly contested information environment,” where users working in concert overload Facebook’s reporting system to take down others’ posts, and also spread coordinated misinformation and hate speech, the report said.

In one example, the coordinated networks took video shot in Mexico in 2018 by the Sinaloa cartel of butchered bodies and falsely labeled it as evidence of the opposition killing Myanmar soldiers on June 28, 2021, said Benjamin Strick, director of investigations for Myanmar Witness.

“There’s a difficulty in catching it for some of these platforms that are so big and perhaps the teams to look for it are so small that it’s very hard to catch water when it’s coming out of a fire hydrant,” he said.

The organization also traced the digital footprint of one soldier to the incineration of 160 homes in the village of Thantlang in late October. He posed in body armor on a ledge overlooking the burning homes, and his post blamed opposition forces for the destruction amid a litany of violent speech.

Facebook “conducted human rights due diligence to understand and address the risks in Myanmar,” and banned the military and used technology to reduce the amount of violating content, spokesperson Frankel said.

Yet Myanmar digital rights activists and scholars say Facebook could still take steps to improve, including greater openness about its policies for content moderation, demotion and removal, and acknowledging its responsibilities toward the Myanmar people.

“We need to start examining damage that has been done to our communities by platforms like Facebook. They portray that they are a virtual platform, and thus can have lower regulation,” said Lee, the visiting scholar. “The fact is that there are real-world consequences.”


JAKARTA, Indonesia (AP)

Volkswagen to Develop New Semiconductor with STMicro Amid Chip Crunch

Germany’s Volkswagen and Franco-Italian chipmaker STMicroelectronics will co-develop a new semiconductor amid a global microchip crunch that has strained the car industry’s supply chain, the companies said on Wednesday.

The move illustrates how Volkswagen, Europe’s biggest carmaker, is striving to gain greater control over the supply of chips, which are found in ever greater numbers in new-generation and low-carbon-emission vehicles.

It is Volkswagen’s first direct relationship with second- and third-tier semiconductor suppliers, a move executives have hinted at since the chip shortage hit the auto industry in late 2020.

Volkswagen software unit Cariad said in May it would also source system-on-chips from Qualcomm for autonomous driving up to Level 4 standards, in which the car can handle all aspects of driving in most circumstances with no human intervention.

The new deal would not affect this partnership, a Cariad spokesperson said.

Neither party disclosed the financial implications of the deal, which makes STMicroelectronics one of Volkswagen’s top technological partners.

Cariad and STMicro are set to co-design the new chip, which will be part of the Stellar microcontroller family of semiconductors, the companies said in a statement.

Both companies are “moving to agree” that Taiwan Semiconductor Manufacturing Company (TSMC) will manufacture it, the statement said.

“With the planned direct cooperation with ST and TSMC, we are actively shaping our entire semiconductor supply chain,” said Murat Aksel, Volkswagen’s purchasing head.

“We’re ensuring the production of the exact chips we need for our cars and securing the supply of critical microchips for years to come.”

The global semiconductor shortage has left automakers worldwide unable to service record-full order books, with unfinished vehicles clogging up warehouses for months and no clear end in sight.


PARIS/BERLIN (Reuters)

Facebook’s Growth Woes in India: Too Much Nudity, Not Enough Women

On Feb. 2, when Meta Platforms reported Facebook’s first-ever quarterly drop in daily users, its finance chief identified higher mobile data costs as a unique obstacle slowing growth in India, its biggest market.

On the same day, the U.S. tech group posted the findings of its own research into Facebook’s business in India on an internal employee forum. The study, conducted over the two years to the end of 2021, identified different problems.

Many women have shunned the male-dominated social network because they’re worried about their safety and privacy, according to the Meta research, which hasn’t been previously reported.

“Concerns about content safety and unwanted contact impede women’s FB use,” said the study, reviewed by Reuters, as it detailed the platform’s main challenges.

“Meta cannot succeed in India while leaving women behind.”

Other obstacles included nudity content, the perceived complexity of its app design, local language and literacy barriers and a lack of appeal among internet users seeking video content, according to the research, which was based on surveys of tens of thousands of people as well as internal user data.

Facebook’s growth began plateauing last year, when it added a few million users in the space of six months in the country of about 1.4 billion people, significantly lagging sister apps WhatsApp and Instagram, according to the report, which noted: “FB has grown more slowly than the internet and other apps.”

A Meta spokesperson, contacted about the study, said the company regularly invested in internal research to better understand the value its products provide and help identify ways to improve.

“But it’s misleading to characterize 7-month-old research as an accurate or comprehensive representation of the state of our business in India,” they added.

Nonetheless, the main Indian issues detailed in the research were not cited by Meta’s chief financial officer, Dave Wehner, on a Feb. 2 call with analysts to discuss results for the final quarter of 2021.

Wehner said Facebook’s user growth in Asia-Pacific and some other areas was hit by competition, plus comparison with prior quarters when COVID resurgences aided user engagement. He identified higher mobile data costs as a “unique” headwind for India.

Asked why the obstacles to growth identified by Wehner were different from those identified in the research, the spokesperson pointed to a Meta filing in April, during its first-quarter earnings, where it said Facebook users in India, Bangladesh and Vietnam represented the top three sources of growth in daily active users in March versus a year before.

Facebook’s fortunes in India have broad implications for Meta, which has lost about half of its value this year amid a broader tech sell-off and faces scrutiny from investors and analysts who fear its growth in potentially high-growth developing markets is starting to wane.

“India contains more FB users than any other country,” said the research, which pegged the number at almost 450 million as of November, after rapid growth over much of the past decade.

“Teams across the company should explicitly consider their strategic position and growth opportunities in India. Outcomes in India could drive global results.”

FAMILY DOESN’T ALLOW FB

The internal study, a “high-level overview of the growth trends” in India, was detailed in a presentation meant to help Facebook’s researchers and product teams. It said that a key problem Facebook had tried to fix for years in India, with limited success, was related to “gender imbalance”.

Men accounted for 75 percent of Facebook’s monthly active users in India last year. That compared with 62 percent of internet users more broadly in early 2020, the researchers found.

“While there is a gender imbalance in internet use across India, the imbalance among Facebook users is even more pronounced,” said the study, adding that online safety concerns and societal pressures were among reasons deterring women from the platform.

The researchers found that 79 percent of female Facebook users had “expressed concern about content/photo misuse”, while 20-30 percent of overall users were estimated to have seen nudity on the platform within the last seven days in the largely conservative country.

India ranked highest globally on the latter metric; around 10 percent of users surveyed in the United States and Brazil said they had seen nudity in the past week, for example, and under 20 percent in Indonesia, according to a survey conducted in August 2021.

“Negative content is more prevalent in India than other countries,” said the internal report.

Family disapproval – “Family doesn’t allow FB” – was a major reason cited by women for not using Facebook, the study found.

The Meta spokesperson said the online gender imbalance was an industry-wide problem and not specific to its platforms.

They said that since 2016, Meta had quadrupled the size of the global team working on safety and security to over 40,000, and that between January and April this year, more than 97 percent of adult nudity and sexual activity content was removed before someone reported it.

WHERE DO YOU LIVE?

Depicting struggles of women users, one research slide showed a picture of an Indian woman walking on a street wearing a saree with which she covered her head and face, a tradition common in many parts of India.

Next to this image was the account of a woman who said she had received 367 friend requests from strangers, with a string of comments on photos like “very beautiful”, “where do you live”, “you look good”.

The comments stopped after she used the “locked profile” feature, according to the woman cited, referring to an option Facebook introduced in 2020 in India allowing users to restrict viewing of pictures and posts to non-friends.

By June 2021, the feature had been adopted by 34 percent of women users in India, said the internal report, but more work was needed, with “bold product changes”, to address the problem of low uptake of Facebook among women.

Facebook has faced criticism globally from online safety campaigners for not doing enough to safeguard women from bullying or harassment. In 2019, the platform said it had a team of people focusing “just on making sure we are keeping women safe”, using technology tools to remove content deemed unsafe.

The Meta spokesperson said it had launched a Women’s Safety Hub and other privacy features such as a profanity filter to help female users in India stay safe online. Since 2021, more than 45 percent of Facebook Groups in India related to entrepreneurship have been created by women, Meta added.

WHATSAPP GRABS CROWN

Facebook’s growth in India began to level off last year, according to the internal research. The platform’s main appeal has been to connect with friends and family, but non-Facebook users were now primarily using the internet to see pictures and videos, the research noted.

Its annualised growth rate based on May-October 2021 showed it was adding just 6.6 million users per year, versus WhatsApp’s 71 million and Instagram’s 128 million, according to one internal slide that illustrated the slowdown graphically.

By November, Facebook’s user base in India was 447 million strong, lagging its Meta sister apps. WhatsApp – which Facebook acquired in 2014 – had 563 million Indian users. Instagram, bought in 2012, had 309 million.

The slowdown stands in contrast to Facebook’s strong expansion in past years. In 2014, the platform had fewer than 100 million users in India, a number that doubled by 2017, the research said.

The Meta spokesperson declined to comment on the user numbers, saying it didn’t disclose country-specific data. They said the company was “definitely increasing the prominence of video” on Facebook.

Lower-educated users are another group that is underrepresented on Facebook, according to the research. The platform faced challenges in meeting demand for content in India’s many local languages, while many people cited the app’s complexity and lack of tutorials as deterrents.

Between 2017 and 2020, India’s monthly online users as a share of the population doubled, boosted by cheaper data plans, but the share of internet users who reported they used Facebook declined during that period, the study found.

“India is now the country with more Facebook, WhatsApp, and Instagram accounts than any other country in the world,” said an internal post accompanying the report. “But continued growth in India faces many challenges.”


NEW DELHI (Reuters)

Exclusive-U.S. Probes China’s Huawei over Equipment Near Missile Silos

The Biden administration is investigating Chinese telecoms equipment maker Huawei over concerns that U.S. cell towers fitted with its gear could capture sensitive information from military bases and missile silos that the company could then transmit to China, two people familiar with the matter said.

Authorities are concerned Huawei could obtain sensitive data on military drills and the readiness status of bases and personnel via the equipment, one of the people said, requesting anonymity because the investigation is confidential and involves national security.

The previously unreported probe was opened by the Commerce Department shortly after Joe Biden took office early last year, the sources said, following the implementation of rules to flesh out a May 2019 executive order that gave the agency the investigative authority.

The agency subpoenaed Huawei in April 2021 to learn the company’s policy on sharing data with foreign parties that its equipment could capture from cell phones, including messages and geolocational data, according to the 10-page document seen by Reuters.

The Commerce Department said it could not “confirm or deny ongoing investigations.” It added that: “protecting U.S. persons’ safety and security against malign information collection is vital to protecting our economy and national security.”

Huawei did not respond to a request for comment. The company has strongly denied U.S. government allegations that it could spy on U.S. customers and poses a national security threat.

The Chinese embassy in Washington did not respond to the specific allegations. In an emailed statement, it said: “The U.S. government abuses the concept of national security and state power to go all out to suppress Huawei and other Chinese telecommunications companies without providing any solid proof that they constitute a security threat to the U.S. and other countries.”

Reuters could not determine what actions the agency might take against Huawei.

Eight current and former U.S. government officials said the probe reflects lingering national security concerns about the company, which was already hit with a slew of U.S. restrictions in recent years.

If the Commerce Department determines Huawei poses a national security threat, it could go beyond existing restrictions imposed by the Federal Communications Commission (FCC), the U.S. telecoms regulator.

Using broad new powers created by the Trump administration, the agency could ban all U.S. transactions with Huawei, demanding that U.S. telecoms carriers that still rely on its gear quickly remove it or face fines or other penalties, a number of lawyers, academics and former officials interviewed by Reuters said.

The FCC declined to comment.

U.S.-CHINA TECH WAR

Huawei has long been dogged by U.S. government allegations it could spy on U.S. customers, though authorities in Washington have made little evidence public. The company denies the allegations.

“If Chinese companies like Huawei are given unfettered access to our telecommunications infrastructure, they could collect any of your information that traverses their devices or networks,” FBI Director Christopher Wray warned in a speech in 2020. “Worse still: They’d have no choice but to hand it over to the Chinese government, if asked.”

Reuters could not determine if Huawei’s equipment is capable of collecting that sort of sensitive information and providing it to China.

“If you can stick a receiver on a (cellphone) tower, you can collect signals and that means you can get intelligence. No intelligence agency would pass an opportunity like that,” said Jim Lewis, a technology and cybersecurity expert at the Center for Strategic and International Studies (CSIS), a Washington D.C.-based think tank.

One move to address the perceived threat was a 2019 law and related rules forbidding U.S. companies from using federal subsidies to buy telecoms equipment from Huawei. It also tasked the FCC with compelling U.S. carriers that receive federal subsidies to purge their networks of Huawei equipment, in return for reimbursement.

TOWERS NEAR MISSILE SILOS

Cell towers equipped with Huawei gear that are close to sensitive military and intelligence sites have become a particular concern for U.S. authorities, according to the two sources and an FCC commissioner.

Brendan Carr, one of the FCC’s five commissioners, said that cellphone towers around Montana’s Malmstrom Air Force Base – one of three that oversee missile fields in the United States – ran on Huawei technology.

In an interview this week, he told Reuters there was a risk that data from smartphones obtained by Huawei could reveal troop movements near the sites: “There’s a very real concern that some of that technology could be used as an early warning system if there happened to be, God forbid, an ICBM missile strike.”

Reuters was unable to determine the exact location or scope of Huawei equipment operating near military facilities. Individuals interviewed by Reuters pointed to at least two other likely cases in Nebraska and Wyoming.

Crystal Rhoades, a commissioner at Nebraska’s telecoms regulator, has flagged to media the risk posed by the proximity of cell towers owned by Viaero to intercontinental ballistic missile (ICBM) silos in the western part of the state.

ICBMs carry nuclear warheads to targets thousands of miles away and are stored in underground silos near military bases. The Nebraska cell towers are near a missile field overseen by F.E. Warren Air Force Base in neighboring Wyoming.

Viaero provides mobile telephone and wireless broadband services to about 110,000 customers in the region. It said in a 2018 filing to the FCC opposing the commission’s efforts at curbing Huawei’s expansion that approximately 80 percent of its equipment was manufactured by the Chinese firm.

That gear could potentially enable Huawei to glean sensitive information about the sites, Rhoades told Reuters in June.

“An enemy state could potentially see when things are online, when things are offline, the level of security, how many people are on duty in any given building where there are really dangerous and sophisticated weapons,” Rhoades said.

Rhoades said in July that she had not been updated on rip and replace efforts by Viaero in more than two years, despite requesting updated information from the company in recent weeks.

At the time of last contact, the company said it would not begin removal efforts until the FCC money became available.

The FCC advised companies on Monday how much of their funding requests it can reimburse.

Viaero did not respond to multiple requests for comment. Huawei also declined to comment.

In Wyoming, the then-CEO of rural carrier Union Wireless, John Woody, said in a 2018 interview with Reuters that the company’s coverage area included ICBM silos near the F.E. Warren Air Force Base and that its equipment included Huawei switches, routers and cell sites.

Last month, Eric Woody, John’s son and acting CEO, said “virtually all the Huawei gear Union purchased remains in our network.” He declined to say whether the towers close to the sensitive military sites contain Huawei equipment.

F.E. Warren Air Force Base referred comment on the Huawei equipment to the Pentagon. The United States Strategic Command, which is responsible for nuclear operations, said in a statement to Reuters: “We maintain constant awareness of activities near our installations and sites.” It noted that “any concerns are on a whole of government level” but declined to provide further details on what those concerns are.

NEW POWERS AGAINST FOREIGN ADVERSARIES

Rick Sofield, a former DOJ official in the national security division who reviewed telecoms transactions, said the Commerce Department probe could give additional bite to the FCC’s crackdown but there was nothing new in targeting Huawei.

“The U.S. government’s concerns regarding Huawei are widely known so any information or communications technology company that continues to use Huawei products is assuming the risk that the U.S. government will come knocking,” said Sofield, who represents U.S. and foreign companies facing U.S. national security reviews. He said he has not worked for Huawei.

The Commerce Department is using authority granted in 2019 that allows it to ban or restrict transactions between U.S. firms and internet, telecom and tech companies from “foreign adversary” nations including Russia and China, according to the executive order and related rules.

The two sources familiar with the Huawei investigation and a former government official said Huawei was one of the Biden administration’s first cases using the new powers, referred to Commerce in early 2021 by the Justice Department.

The Justice Department referred requests for comment by Reuters to Commerce.

The subpoena is dated April 13, 2021, the same day that Commerce announced a document request was sent to an unnamed Chinese company under the new powers.

It gives Huawei 30 days to provide seven years’ worth of “records identifying Huawei’s business transactions and relationships with foreign entities located outside of the United States, including foreign government agencies or parties, that have access to, or that share in any capacity, U.S. user data collected by Huawei.”

Noting that the “focus of this investigation is the provisioning of mobile network and telecommunications equipment…by Huawei in the United States,” it also asks Huawei for a complete catalog of “all types of equipment sold” to “any communications provider in the United States,” including names and locations of the parties to the sale.


WASHINGTON (Reuters)
