Paper instructions:

From this question:

1. When Algorithms Decide What You Pay

YOU MAY NOT REALIZE IT, but every website you visit is created, literally, the moment you arrive. Each element of the page — the pictures, the ads, the text, the comments — lives on computers in different places and is sent to your device when you request it. That means it’s easy for companies to create different web pages for different people. Sometimes that customization is helpful, such as when you see search results for restaurants near you. Sometimes it can be creepy, such as when ads follow you around from website to website. And sometimes customization can cost you money, research has shown. Orbitz showed higher-priced hotels to owners of Mac computers, for instance. Staples offered the same products at higher prices to people living in certain ZIP codes. Last year, we found that The Princeton Review was charging different prices for its online SAT tutoring course in different ZIP codes. In some ZIP codes, the course cost $6,600; in others, that same course was offered for as much as $8,400. Charging different prices to different geographic regions is regulated in Europe, but not in the United States. In this case, it resulted in inadvertent discrimination. Our analysis found that Asians were nearly twice as likely to get that higher price from The Princeton Review as non-Asians. Asians make up 4.9 percent of the U.S. population overall, but they accounted for more than 8 percent of the population in areas where The Princeton Review was charging higher prices for its SAT prep packages. Consider the difference between two ZIP codes with similar incomes in Texas. In Houston’s ZIP code 77072, with a relatively large Asian population, the Princeton Review course was offered for $7,200, while in Dallas’ ZIP code 75203, with almost no Asians, the course was offered for $6,600. And in heavily Asian, low-income Queens ZIP code 11355, the course was offered for $8,400.
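The disparity described above can be sketched as a small computation. All figures below are invented for illustration (only three of the ZIP codes and their quoted prices come from the article; the population counts and the fourth ZIP are hypothetical). The real analysis used Census demographics and scraped Princeton Review quotes.

```python
# Hypothetical ZIP-level data: (Asian population, total population, quoted price).
zips = {
    "77072": (25_000, 60_000, 7200),
    "75203": (300, 15_000, 6600),
    "11355": (45_000, 80_000, 8400),
    "00000": (1_200, 12_000, 6600),  # filler ZIP to round out the sample
}

HIGH_PRICE = 7200  # treat quotes at or above this as "higher priced"

def asian_share(zip_codes):
    """Fraction of the combined population of these ZIPs that is Asian."""
    asian = sum(zips[z][0] for z in zip_codes)
    total = sum(zips[z][1] for z in zip_codes)
    return asian / total

high_price_zips = [z for z, (_, _, price) in zips.items() if price >= HIGH_PRICE]

print(f"Asian share overall:            {asian_share(zips):.1%}")
print(f"Asian share in high-price ZIPs: {asian_share(high_price_zips):.1%}")
```

A gap between the two shares, like the 4.9 percent versus 8 percent figures in the article, is what signals that a facially geographic pricing rule is falling disproportionately on one group.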
The Princeton Review told us the pricing differences reflected the varying costs of running its business and did not reflect any discrimination on its part. But that’s the thing with algorithms — they can discriminate unintentionally. And as we enter a world of mass customization, we need to be on the lookout for this kind of discrimination.

2. Amazon Says It Puts Customers First. But Its Pricing Algorithm Doesn’t

Amazon bills itself as “Earth’s most customer-centric company.” Yet its algorithm is hiding the best deal from many customers. One day recently, we visited Amazon’s website in search of the best deal on Loctite super glue, the essential home repair tool for fixing everything from broken eyeglass frames to shattered ceramics. In an instant, Amazon’s software sifted through dozens of combinations of price and shipping, some of which were cheaper than what one might find at a local store. One seller, an online retailer from Farmers Branch, Texas, with a 95 percent customer satisfaction rating, was selling Loctite for $6.75 with free shipping. Fat Boy Tools of Massillon, Ohio, a competitor with a similar customer rating, was nearly as cheap: $7.27 with free shipping. The computer program brushed aside those offers, instead selecting the vial of glue sold by Amazon itself for slightly more, $7.80. This seemed like a plausible choice until another click of the mouse revealed shipping costs of $6.51. That brought the total cost, before taxes, to $14.31, or nearly double the price Amazon had listed on the initial page. What kind of sophisticated shopping algorithm steers customers to a product that costs so much more than seemingly comparable alternatives? One that substantially favors Amazon and sellers it charges for services, an examination by ProPublica found. Amazon often says it seeks to be “Earth’s most customer-centric company.” Jeffrey P.
Bezos, its founder and CEO, has been known to put an empty chair in meetings to remind employees of the need to focus on the customer. But in fact, the company appears to be using its market power and proprietary algorithm to advantage itself at the expense of sellers and many customers. Unseen and almost wholly unregulated, algorithms play an increasingly important role in broad swaths of American life. They figure in decisions large and small, from whether a person qualifies for a mortgage to the sentence someone convicted of a crime might serve. The weightings and variables that underlie these equations are often closely guarded secrets known only to people at the companies that design and use them. But while the math is hidden from public view, the effects of algorithms can be vast. With more than 300 million active customer accounts and more than $100 billion in annual revenue, Amazon is a shopping giant whose algorithm can make or break other retailers. And so ProPublica set out to see how Amazon’s software was shaping the marketplace. We looked at 250 frequently purchased products over several weeks to see which ones were selected for the most prominent placement on Amazon’s virtual shelves — the so-called “buy box” that pops up first as a suggested purchase. About three-quarters of the time, Amazon placed its own products and those of companies that pay it for services in that position, even when there were substantially cheaper offers available from others. That turns out to be an important edge. Most Amazon shoppers end up clicking “add to cart” for the offer highlighted in the buy box. “It’s the most valuable small button on the Internet today,” said Shmuli Goldberg, an Israeli technologist who has extensively studied Amazon’s algorithm. Amazon does give customers a chance to comparison shop, with a listing that ranks all vendors of the same item by “price + shipping.” It appears to be the epitome of Amazon’s customer-centric approach.
But there, too, the company gives itself an oft-decisive advantage. Its rankings omit shipping costs only for its own products and those sold by companies that pay Amazon for its services. We found that the practice earned Amazon-linked products higher rankings in more than 80 percent of cases. Amazon’s offer of the Loctite glue, a respectable No. 5 on the comparison list, dropped to the 39th-best deal when shipping was included. (The prices Amazon shows are ranked correctly for those who pay $99 per year for Amazon’s Prime shipping service and for those who are buying $49 or more in eligible items.) Erik Fairleigh, a spokesman for Amazon, said the algorithm that selects which product goes into the “buy box” accounts for a range of factors beyond price. “Customers trust Amazon to have great prices, but that’s not all — vast selection, world-class customer service and fast, free delivery are critically important,” he said in an e-mailed statement. “These components, and more, determine our product listings.” (Read Amazon’s original statement and the statement Amazon sent after this story was published.) Even when Amazon offers products from different vendors, only one seller’s item is presented in the “buy box.” And it’s not always the best deal. Fairleigh declined to answer detailed questions, including questions about why Amazon’s product rankings excluded shipping costs only for itself and its paid partners. The decision to allow non-Amazon companies to sell products on the site was controversial within the company. But Bezos pushed ahead, saying he was willing to lose sales if it made his company more competitive in the long run. “If we side with the consumer on that kind of decision,” he said at the 2007 annual shareholder meeting, “over time it will force the right kind of behaviors on ourself.” At that meeting, a shareholder asked about Amazon’s practice of promoting products sold by other companies on its website.
Bezos replied that the company had “very objective customer-centered algorithms that automatically award the ‘buy box’ to the lowest price seller,” provided “they actually have it in stock and can deliver it.” It is not clear why Amazon’s algorithm now pushes its own products ahead of better deals offered by others. Perhaps Amazon is taking the view that its widely admired shipping and delivery offer the best possible satisfaction for customers, even if they cost more. Another possibility is that the company is trying to encourage shoppers to join the Prime program, which offers free shipping on many items (including the Loctite super glue). When non-Prime customers initially view Amazon products, they are offered “FREE Shipping on eligible orders.” When they reach the final page of checkout, the shipping fees are revealed, along with an advertisement to avoid such fees by joining Prime. The costs of simply buying the algorithm-selected choice can add up. The average price difference between what the program recommended and the truly cheapest price was $7.88 for the 250 products we tested. An Amazon customer who bought all the products on our list from the buy box would have paid nearly 20 percent more — about $1,400 extra — than if they had bought the cheapest items offered by other vendors. Amazon’s algorithm also takes a toll on outside companies hoping to sell products on the website. To increase their chances of winning the buy box, many sellers are paying Amazon to warehouse and ship their products through a program called “Fulfilled by Amazon.” The fees for the program, which vary by the size and weight of the items being shipped, can amount to 10 to 20 percent of sales. Amazon gives itself an edge by not including the price of shipping on its own products. To get the same benefit, other sellers have to pay Amazon. Paying Amazon appears to be a sound strategy.
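The ranking quirk at the heart of this story, dropping shipping from the listed totals of Amazon-linked offers, can be sketched in a few lines. The Amazon and Fat Boy Tools figures follow the Loctite example above; the other sellers are hypothetical filler.

```python
# Sketch of the "price + shipping" ranking described above. Shipping is
# omitted from the listed total for Amazon-linked offers, mirroring the
# behavior ProPublica observed. Sellers A, C and D are hypothetical.
offers = [
    # (seller, item price, shipping cost, Amazon or FBA seller?)
    ("Seller A",      6.75, 0.00, False),
    ("Fat Boy Tools", 7.27, 0.00, False),
    ("Seller C",      7.50, 0.00, False),
    ("Amazon",        7.80, 6.51, True),
    ("Seller D",      7.99, 0.00, False),
]

def true_total(offer):
    """What the customer actually pays."""
    _, price, shipping, _ = offer
    return price + shipping

def listed_total(offer):
    """What the comparison page sorts by: Amazon-linked offers are
    ranked as if shipping were free."""
    _, price, shipping, amazon_linked = offer
    return price if amazon_linked else price + shipping

def rank_of(seller, key):
    ranked = sorted(offers, key=key)
    return 1 + [o[0] for o in ranked].index(seller)

print("Amazon's listed rank:", rank_of("Amazon", listed_total))  # 4
print("Amazon's true rank:  ", rank_of("Amazon", true_total))    # 5
```

Scaled up to dozens of sellers, this is how an offer that is really the 39th-best deal can appear at No. 5 on the comparison list.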
Fulfilled by Amazon vendors and Amazon itself were just about the only sellers — in 94 percent of the cases we analyzed — that ever won the buy box without having the cheapest product. Through its rankings and algorithm, Amazon is quietly reshaping online commerce almost as dramatically as it reshaped offline commerce when it burst onto the scene more than 20 years ago. Just as the company’s cheap prices and fast shipping caused a seismic shift in retailing that shuttered stores selling books, electronics and music, now Amazon’s pay-to-play culture is forcing online sellers to choose between paying hefty fees or leaving the platform altogether. Consider Barebones WorkWear, a Sacramento clothing retailer that has been selling on Amazon since 2004. This year, the company removed nearly all of its items from Amazon and shuttered a warehouse and call center that were devoted to Amazon sales. “Competition between us and Amazon is just insurmountable,” Barebones chief operating officer Mason Moore said. The profit margins for most clothing items were too low, he said, to allow the company to sell through the Fulfilled by Amazon, or FBA, program. But, he said, “FBA is really the only avenue that we see as any feasible way to do business with Amazon.” This week, Barebones has just five items listed on Amazon — all of them fulfilled by Amazon. Last Christmas, so many vendors joined Fulfilled by Amazon that the company ran out of space in some of its warehouses. This year, the company has doubled its number of warehouses. In July, Amazon reported record profits, and the company’s chief financial officer, Brian Olsavsky, told investors that Fulfilled by Amazon growth was “really strong.” Tech companies’ practice of favoring their own listings has occasionally earned regulators’ scrutiny. The European Commission, for example, has accused Google of violating EU antitrust rules by favoring its own shopping service over those of other vendors.
Amazon didn’t start as an open marketplace for online sellers. When it opened its virtual doors in 1995, Amazon sold only its own products. It began letting other vendors onto its product listings in 2000. “Our judgment was simple,” Bezos wrote of that decision in a 2005 letter to shareholders. “If a third party could offer a better price or better availability on a particular item, then we wanted our customer to get easy access to that offer.” For merchants, listing their wares on Amazon was a great opportunity. Amazon attracted legions of customers, well worth the 6 to 25 percent commission merchants paid the online colossus on each sale. By 2007, more than 1 million third-party sellers had joined Amazon. Collectively, they generated about 30 percent of unit sales, the company said at the time. One of the sellers Amazon attracted was Kate Erkavun. She had recently received a master’s degree in industrial engineering, but also sold cosmetics on eBay out of her apartment while her husband worked at an engineering company. “I was young, I had just graduated,” she said. “I worked in a pharmaceutical company and I quit after seven months. I realized it wasn’t for me.” She started selling makeup on eBay, and then migrated to Amazon, too. At first, Erkavun’s listings on Amazon were similar to her eBay listings: one page for each product that customers could page through at leisure. But over time, Amazon simplified the design so that customers would be presented with only one default vendor for each product. Amazon’s algorithm would choose which seller won that default position, the buy box. While the exact formulas used to pick the winner were secret, Amazon’s website advises sellers that they can increase their chances by having low prices, having items in stock, offering free shipping and getting excellent customer service ratings.
To optimize their chances, many sellers started using algorithmic software to constantly change prices to adapt to competitors’ moves. Soon, Amazon became a highly dynamic marketplace, similar to a stock-trading floor, where prices for products changed as often as every 15 minutes. Erkavun and her husband, Gokhan, were determined to increase their chances of sales. Gokhan, who quit his job to help run the business, worked with a programmer to write software that repriced their products throughout the day. And the couple bought a building in Nutley, New Jersey, to store their approximately 10,000 shampoos, lipsticks, lotions and other cosmetics in a temperature-controlled warehouse. They offered free shipping and quick turnaround times, and worked hard to keep their customer service ratings high. At first, their techniques seemed to be successful. In 2010, Amazon sales were half of their revenue and profits were at a record high. “We were very happy,” said Gokhan. But in 2011, he said, Beauty Bridge’s sales started slipping as Amazon entered the cosmetics business and began consistently winning the buy box. “If you don’t win the buy box, your chance of selling is low,” Kate said. Sellers who don’t win the buy box are placed on a page called “More Buying Choices,” on a list that Amazon describes as ranked by price plus shipping. However, since Amazon doesn’t include the cost of shipping for itself and its fulfillment partners, the rankings on that page can be misleading. One day recently, for instance, Amazon was listed as the top-ranked seller — both in the buy box and at the top of the buying choices page — for a self-tanning lotion called Vita Liberata. Beauty Bridge was offering the lowest price at $27.03, but Amazon had won the buy box with an offer of $29.98. When a customer put the lotion from Amazon in her cart, the added shipping cost brought the total to $35.46.
Beauty Bridge was offering free shipping, so with or without shipping, its offer should have been listed higher than Amazon’s. But it was not. When Gokhan Erkavun was told of his lotion’s poor ranking despite its cheaper price, he just sighed and said, “Amazon is not really fair in terms of competition, but we don’t have much choice. We have to be there.” On its Canadian website, Amazon discloses that its own items are ranked without shipping price. But in the United States, Amazon’s website states that the default sort order of the offer listing is “ascending Price + Shipping.” Of course, most Amazon customers never make it to the More Buying Choices page where Beauty Bridge’s listing was ranked poorly. Among the countless consultants and conferences devoted to winning the buy box, it’s well known that Amazon’s algorithm gives an advantage to itself, and to sellers who pay to join the Fulfilled by Amazon program. “Amazon definitely does weight things in favor of the FBA seller,” said Michael Butcher, senior account manager at SellerEngine Software, which sells algorithmic pricing software for Amazon sellers. “It does seem unfair and it is sometimes hard for merchants.” For a few years, Amazon even advertised the advantage it offered to its paid partners. According to Web pages stored by the Internet Archive, the Amazon website said: “Because most FBA listings are ranked without a shipping cost, you get an edge when competing!” The language remained on the page from February 2013 through December 2015. This year, the language has been changed to: “As you grow your competitive edge, you can increase your chance of winning the Buy Box.” But Beauty Bridge’s Kate and Gokhan Erkavun didn’t want to pay the fees to join the program. They had their own warehouse and didn’t need Amazon’s. And they estimated the program would cost them at least an additional 15 percent of sales. They held out until 2014. By then, sales had slid 30 percent from the peak in 2010.
In 2014, “we got to a point where we couldn’t survive without doing FBA,” Gokhan said. Since joining the “Fulfilled by Amazon” program, Gokhan says, the company’s sales have recovered, but profits have not, because of the fees. Gokhan is now hoping that Wal-Mart’s recent purchase of online shopping website will increase the pressure on Amazon to give small online retailers a better deal. “We need Wal-Mart to really get serious about competing with Amazon,” he said. “Otherwise in 10 years, we aren’t going to have many retailers left.”

3. What Facebook Knows About You

WE LIVE IN AN ERA of increasing automation. Machines help us not only with manual labor but also with intellectual tasks, such as curating the news we read and calculating the best driving directions. But as machines make more decisions for us, it is increasingly important to understand the algorithms that produce their judgments. We’ve spent the year investigating algorithms, from how they’ve been used to predict future criminals to Amazon’s use of them to advantage itself over competitors. All too often, these algorithms are a black box: It’s impossible for outsiders to know what’s going on inside them. Today we’re launching a series of experiments to help give you the power to see inside. Our first stop: Facebook and your personal data. Facebook has a particularly comprehensive set of dossiers on its more than 2 billion members. Every time a Facebook member likes a post, tags a photo, updates their favorite movies in their profile, posts a comment about a politician, or changes their relationship status, Facebook logs it. When they browse the Web, Facebook collects information about pages they visit that contain Facebook sharing buttons. When they use Instagram or WhatsApp on their phone, both of which are owned by Facebook, they contribute more data to Facebook’s dossier.
And in case that wasn’t enough, Facebook also buys data about its users’ mortgages, car ownership and shopping habits from some of the biggest commercial data brokers. Facebook uses all this data to offer marketers a chance to target ads to increasingly specific groups of people. Indeed, we found Facebook offers advertisers more than 1,300 categories for ad targeting — everything from people whose property size is less than 26 acres to households with exactly seven credit cards. We built a tool that works with the Chrome Web browser that lets you see what Facebook says it knows about you — you can rate the data for accuracy and you can send it to us, if you like. We will, of course, protect your privacy. We won’t collect any identifying details about you. And we won’t share your personal data with anyone. This is the same information that Facebook itself offers users — buried deep in its site. (It’s in a section of its settings called “Ad Preferences.”) It’s not clear if this data represents all that Facebook knows about a person. For instance, we haven’t yet seen anyone with credit card or property ownership listed. Which is why we’re particularly interested in hearing what you found out. You can help us examine whether what Facebook says it knows matches up with the categories it sells. Also, as part of a collaboration with WNYC’s Note to Self podcast, we’re asking people to tell us how they feel about what Facebook knows about them. To join that experiment, sign up and we’ll email you the results of our very unscientific audit of Facebook’s personal dossiers. Thanks for your help!

4. Facebook Doesn’t Tell Users Everything It Really Knows About Them

The site shows users how Facebook categorizes them. It doesn’t reveal the data it is buying about their offline lives. Facebook has long let users see all sorts of things the site knows about them, like whether they enjoy soccer, have recently moved, or like Melania Trump.
But the tech giant gives users little indication that it buys far more sensitive data about them, including their income, the types of restaurants they frequent and even how many credit cards are in their wallets. Since September, ProPublica has been encouraging Facebook users to share the categories of interest that the site has assigned to them. Users showed us everything from “Pretending to Text in Awkward Situations” to “Breastfeeding in Public.” In total, we collected more than 52,000 unique attributes that Facebook has used to classify users. Facebook’s page explaining “what influences the ads you see” says the company gets the information about its users “from a few different sources.” What the page doesn’t say is that those sources include detailed dossiers obtained from commercial data brokers about users’ offline lives. Nor does Facebook show users any of the often remarkably detailed information it gets from those brokers. “They are not being honest,” said Jeffrey Chester, executive director of the Center for Digital Democracy. “Facebook is bundling a dozen different data companies to target an individual customer, and an individual should have access to that bundle as well.” When asked this week about the lack of disclosure, Facebook responded that users can discern the use of third-party data if they know where to look. Each time an ad appears using such data, Facebook says, users can click a button on the ad revealing that fact. Users still cannot see what specific information about their lives is being used. The company said it does not disclose the use of third-party data on its general page about ad targeting because the data is widely available and was not collected by Facebook. “Our approach to controls for third-party categories is somewhat different than our approach for Facebook-specific categories,” said Steve Satterfield, a Facebook manager of privacy and public policy.
“This is because the data providers we work with generally make their categories available across many different ad platforms, not just on Facebook.” Satterfield said users who don’t want that information to be available to Facebook should contact the data brokers directly. He said users can visit a page in Facebook’s help center, which provides links to the opt-outs for six data brokers that sell personal data to Facebook. Limiting commercial data brokers’ distribution of your personal information is no simple matter. For instance, opting out of Oracle’s Datalogix, which provides about 350 types of data to Facebook according to our analysis, requires “sending a written request, along with a copy of government-issued identification” in postal mail to Oracle’s chief privacy officer. Users can ask data brokers to show them the information stored about them. But that can also be complicated. One Facebook broker, Acxiom, requires people to send the last four digits of their Social Security number to obtain their data. Facebook changes its providers from time to time, so members would have to regularly visit the help center page to protect their privacy. One of us actually tried to do what Facebook suggests. While writing a book about privacy in 2013, reporter Julia Angwin tried to opt out from as many data brokers as she could. Of the 92 brokers she identified that accepted opt-outs, 65 of them required her to submit a form of identification such as a driver’s license. In the end, she could not remove her data from the majority of providers. ProPublica’s experiment to gather Facebook’s ad categories from readers was part of our Black Box series, which explores the power of algorithms in our lives. Facebook uses algorithms not only to determine the news and advertisements that it displays to users, but also to categorize its users into tens of thousands of micro-targetable groups.
Our crowd-sourced data showed us that Facebook’s categories range from innocuous groupings of people who like southern food to sensitive categories such as “Ethnic Affinity,” which categorizes people based on their affinity for African-Americans, Hispanics and other ethnic groups. Advertisers can target ads toward a group — or exclude a particular group from being shown ads. Last month, after ProPublica bought a Facebook ad in its housing categories that excluded African-Americans, Hispanics and Asian-Americans, the company said it would build an automated system to help it spot ads that illegally discriminate. Facebook has been working with data brokers since 2012, when it signed a deal with Datalogix. This prompted Chester, the privacy advocate at the Center for Digital Democracy, to file a complaint with the Federal Trade Commission alleging that Facebook had violated a consent decree with the agency on privacy issues. The FTC has never publicly responded to that complaint, and Facebook subsequently signed deals with five other data brokers. To find out exactly what type of data Facebook buys from brokers, we downloaded a list of 29,000 categories that the site provides to ad buyers. Nearly 600 of the categories were described as being provided by third-party data brokers. (Most categories were described as being generated by clicking pages or ads on Facebook.) The categories from commercial data brokers were largely financial, such as “total liquid investible assets $1-$24,999,” “People in households that have an estimated household income of between $100K and $125K,” or even “Individuals that are frequent transactors at lower cost department or dollar stores.” We compared the data broker categories with the crowd-sourced list of what Facebook tells users about themselves. We found none of the data broker information among any of the tens of thousands of “interests” that Facebook showed users.
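The comparison just described is, at its core, a set difference between two lists of category strings. A minimal sketch, using a handful of categories drawn from or paraphrased in the article (the lists here are tiny illustrations, not the real 29,000-category data set):

```python
# Which broker-sourced targeting categories never appear in what users
# are shown on their own "Ad Preferences" pages? Category strings below
# are illustrative samples, not the full lists ProPublica analyzed.

# Categories Facebook sells to advertisers, with the broker-sourced subset.
broker_sourced = {
    "Total liquid investible assets $1-$24,999",
    "Frequent transactor at lower cost department or dollar stores",
}

# Categories crowd-sourced from users' own Ad Preferences pages.
shown_to_users = {"NPR", "Away from family", "Farmville slots"}

# Broker categories that users never see about themselves.
hidden = broker_sourced - shown_to_users
print(sorted(hidden))
```

Run over the full data, this difference was the entire broker-sourced list: none of the nearly 600 broker categories appeared among the interests shown to users.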
Our tool also allowed users to react to the categories they were placed in as being “wrong,” “creepy” or “spot on.” The category that received the most votes for “wrong” was “Farmville slots.” The category that got the most votes for “creepy” was “Away from family.” And the category that was rated most “spot on” was “NPR.” Clarification, Jan. 4, 2017: We’ve added details about what Facebook tells users regarding third-party data. Specifically, each time an ad appears using such information, Facebook says, users can click a button on the ad revealing the use of third-party data.

5. When Machines Learn by Experimenting on Us

AS WE ENTER the era of artificial intelligence, machines are constantly trying to predict human behavior. Google predicts traffic patterns based on motion sensors in our phones. Spotify anticipates the music we might want to listen to. Amazon guesses what books we want to read next. Machines learn to make these predictions by analyzing patterns in huge amounts of data. Some patterns that machines find can be nonsensical, such as analyses that have found that divorce rates in Maine go down when margarine consumption decreases. But other patterns can be extremely useful: For instance, Google uses machine learning to understand how to optimize energy use at its data centers. Depending on what data they are trained on, machines can “learn” to be biased. That’s what happened in the fall of 2012, when Google’s machines “learned” in the run-up to the presidential election that people who searched for President Obama wanted more Obama news in subsequent searches, but people who searched for Republican nominee Mitt Romney did not. Google said the bias in its search results was an inadvertent result of machine learning. Sometimes machines build their predictions by conducting experiments on us, through what is known as A/B testing. This is when a website randomly shows different headlines or different photos to different people.
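The whole mechanism fits in a few lines: randomly assign each visitor a variant, tally impressions and clicks, and declare the variant with the higher click rate the winner. The simulation below uses the two Ebola headlines quoted later in this article; the visitor counts and click probabilities are invented for illustration.

```python
# Minimal sketch of a headline A/B test: each visitor is randomly
# assigned one headline, and clicks are tallied per headline.
import random
from collections import Counter

HEADLINES = [
    "Thirteen of his family died from Ebola. He lived.",
    "Life after a plague destroyed his world.",
]

def run_test(n_visitors, click_rates, seed=0):
    """Simulate visitors; click_rates maps headline -> click probability."""
    rng = random.Random(seed)
    shown, clicked = Counter(), Counter()
    for _ in range(n_visitors):
        headline = rng.choice(HEADLINES)  # uniform random assignment
        shown[headline] += 1
        if rng.random() < click_rates[headline]:
            clicked[headline] += 1
    return {h: clicked[h] / shown[h] for h in HEADLINES}

# Hypothetical click probabilities for the two headlines.
rates = run_test(10_000, {HEADLINES[0]: 0.10, HEADLINES[1]: 0.01})
winner = max(rates, key=rates.get)
print("winner:", winner)
```

Real systems add statistical stopping rules so the test ends once the gap between variants is unlikely to be noise, which is why, as described below, a winner is sometimes obvious in minutes and sometimes takes an hour.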
The website can then track which option is more popular by counting how many users click on the different choices. One A/B testing product, Optimizely, is particularly common. Earlier this year, Princeton researchers found Optimizely code on 3,306 websites among 100,000 sites visited. (Optimizely says that its “experimentation platform” has been used to deliver more than 700 billion “experiences.”) The Princeton researchers found that the Jawbone fitness tracker website was using Optimizely to target a specific message to users at six geographic locations, and that one software company, Connectify, was using Optimizely to vary the discounts it offered to visitors. During the presidential primaries, the candidates used Optimizely to vary their website colors and photos, according to a study by the news outlet Fusion. “People should be cognizant that what they see on the web is not set in stone,” said Princeton researcher Dillon Reisman. Many news sites, including The New York Times and the New York Post, use Optimizely to evaluate different headlines for news articles. Remy Stern, chief digital officer of the New York Post, said that the website has been using Optimizely to test headlines for several years. Two to five headlines will be randomly shown until the system can determine the most popular headline. “In the old days, editors thought they knew what people wanted to read,” Stern said. “Now we can test out different headlines to see what angle is most interesting to readers.” The Post’s online headlines are totally different from the ones that are crafted each evening for the next morning’s newspaper, he said. The print headlines use a lot of idioms, such as calling the New York City mayor “Hizzoner,” that don’t work online, Stern said. The New York Times began testing web headlines on its homepage late last year, said senior editor Mark Bulik.
“We can tell which stories on the homepage are not meeting our expectations on readership, so we try to come up with alternatives for those headlines,” he said. Bulik said the winning headline is sometimes obvious within minutes; other times it takes as long as an hour for test results to become clear. A good headline can increase readership dramatically. For example, he said, the headline “Thirteen of his family died from Ebola. He lived.” increased readership by 1,006 percent over “Life after a plague destroyed his world.” The winning New York Times headlines are used on the homepage, and increasingly inform editors’ choices for the final headlines for the online article and in the print newspaper, said Carla Correa, social strategy editor for the Times. Correa said that the Times tries to avoid one of the perils of optimizing headlines: the “clickbait” headline that promises more than it delivers. “If we see a headline that we think is misleading to readers, we push back,” she said. To show you how A/B testing works, we’ve gathered headline tests that have run on the websites of The New York Times and the New York Post. And, because New York Post headlines are so much fun, we also built a Twitter bot that automatically tweets out all the headlines that the Post is testing on its stories. Follow it here.

6. How Machines Learn to Be Racist

EARLY COMPUTERS were mostly just big calculators, helping us process large numbers. Now, however, computers are so powerful that they are learning how to make decisions on their own in the rapidly growing field of artificial intelligence. But AI-enabled machines are only as smart as the knowledge they have been fed. Microsoft learned that lesson the hard way earlier this year when it released an AI Twitter bot called Tay that had been trained to talk like a Millennial teen. Within 24 hours, however, a horde of Twitter users had retrained Tay to be a racist Holocaust denier, and Microsoft was forced to kill the bot.
This was not the first episode of an AI system learning the wrong lessons from its data inputs. Last year, Google’s automatic image recognition engine tagged a photo of two black people as “gorillas” — presumably because the machine learned from a database that hadn’t included enough photos of either animals or people. The company apologized and said it would fix it. To illustrate how sensitive AI systems are to their information diet, we built an AI engine that deduced synonyms from news articles published by different types of news organizations. We used an algorithm created by Google called word2vec, which is one of the neural nets that Google uses in its search engine, its image recognition tool, and to generate automatic email responses. We trained the synonym picker by having it “read” hundreds of thousands of articles from six different categories of news outlets:

• Left: The Huffington Post and The Nation
• Right: The Daily Caller and Breitbart News
• Mainstream: The New York Times and The Washington Post
• Digital: The Daily Beast and Vox
• Tabloids: The New York Post and the New York Daily News
• ProPublica

Then we let the synonym picker guess which words appeared to have similar meanings, based on the knowledge it gained from each news database. The varied results generated by each category were striking. Consider the synonyms generated for “BlackLivesMatter.” For the Left-trained AI, “hashtag” was the closest synonym; for the Right-trained AI, it was “AllLivesMatter.” For the AI trained with Digital news outlets, close synonyms were “Ferguson” and “Bernie.” Or consider synonyms for “woman.” In the Tabloids, “victim” ranked high, while in ProPublica (admittedly trained on the smallest amount of data), “knifepoint” ranked as a close synonym. For “man,” the words “son,” “lover” and “gentleman” were ranked about as high on the list of synonyms by news outlets on the Left as “stabs,” “suspect” and “burglar” were by outlets on the Right.
For “abortion,” the Left-trained AI chose “contraception” as a close synonym, while the Right-trained AI chose “parenthood” and “late-term.” The Mainstream-media-trained AI chose “clinics” among its top synonyms. Try it for yourself here.

Read all the short articles in the links; they are more like short blog posts. What are the ethical implications in the management of information systems, and what can managers do about them?