
Wednesday, May 9, 2018


Technology - Google News


Google's AI sounds like a human on the phone — should we be worried?

Posted: 09 May 2018 08:12 AM PDT

It came as a total surprise: the most impressive demonstration at Google's I/O conference yesterday was a phone call to book a haircut. Of course, this phone call was different. It wasn't made by a human, but by the Google Assistant, which did an uncannily good job of asking the right questions, pausing in the right places, and even throwing in the odd "mmhmm" for realism.

The crowd was shocked, but the most impressive thing about the call was that the person on the other end didn't seem to suspect they were talking to AI at all. This is a huge technological achievement for Google, but it also opens up a Pandora's box of ethical and social challenges.

For example, does Google have an obligation to tell people they're talking to a machine? Does technology that mimics humans erode our trust in what we see and hear? And is this another example of tech privilege, where those in the know can offload the boring conversations they don't want to have onto a machine, while those receiving the calls (most likely low-paid service workers) have to deal with some idiot robot?

In other words, this was a typical Google demo: equal parts wonder and worry.

But let's start with the basics. Onstage, Google didn't talk much about the details of how the feature, called Duplex, works, but an accompanying blog post adds some important information. First, Duplex isn't some futuristic AI chatterbox, capable of open-ended conversation. As Google's researchers write, it can only converse in "closed domains" — exchanges that are functional, with strict limits on what is going to be said. You want a table? For how many? On what day? And what time? Okay, thanks, bye. Easy!

Mark Riedl, an associate professor of AI and storytelling at Georgia Tech, told The Verge that he thought Google's Assistant would probably work "reasonably well," but only in these formulaic situations. "Handling out-of-context language dialogue is a really hard problem," Riedl told The Verge. "But there are also a lot of tricks to disguise when the AI doesn't understand or to bring the conversation back on track."

One of Google's demos showed perfectly how these tricks work. The AI was able to navigate a series of misunderstandings but did so by rephrasing and repeating questions. This sort of thing is common with computer programs designed to talk to humans. Snippets of their conversation seem to show real intelligence, but when you analyze what's actually being said, it turns out they're just preprogrammed gambits. Google's blog post offers some fascinating details on this, spelling out some of the tricks Duplex will use. These include elaborations ("for next Friday" "for when?" "for Friday next week, the 18th."), syncs ("can you hear me?"), and interruptions ("the number is 212-" "sorry, can you start over?").
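To make that concrete, here is a minimal, purely illustrative Python sketch of a closed-domain booking bot. The slot names, canned lines, and "didn't understand" check are all invented for this example and say nothing about how Duplex is actually built:

```python
# Purely illustrative sketch (not Google's Duplex code): a closed-domain
# booking bot walks a fixed list of slot-filling questions and falls back
# on canned "recovery" gambits (repeat, sync check) rather than any
# open-ended understanding of language.

SLOT_QUESTIONS = {
    "party_size": "How many people will be dining?",
    "day": "What day would you like the table?",
    "time": "What time works for the reservation?",
}

RECOVERY_GAMBITS = [
    "Sorry, could you say that again?",   # interruption / repeat
    "Can you hear me okay?",              # sync check
]

def run_booking_call(ask):
    """`ask` speaks a prompt and returns the reply text ("" if unclear)."""
    filled = {}
    for slot, question in SLOT_QUESTIONS.items():
        reply = ask(question)
        for gambit in RECOVERY_GAMBITS:
            if reply:                      # got something usable, move on
                break
            reply = ask(gambit)            # otherwise try the next canned line
        filled[slot] = reply or "(unresolved)"
    return filled

if __name__ == "__main__":
    # Simulate the human side of the call with canned answers; the empty
    # string stands in for a reply the bot failed to parse.
    answers = iter(["two people", "", "Friday", "7 pm"])
    def ask(prompt):
        print("BOT:", prompt)
        return next(answers, "")
    print("Booked:", run_booking_call(ask))
```

The point of the sketch is that nothing in it understands language in any general sense; the bot only ever chooses between a fixed question list and a fixed recovery list.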

It's important to note that Google is calling Duplex an "experiment." It's not a finished product, and there's no guarantee it'll be widely available in this form, or widely available at all. (See also: the real-time translation feature Google showed off for its Pixel Buds last year. It worked flawlessly onstage, but it was hit-and-miss in real life and only available to Pixel phone owners.) Duplex works in just three scenarios at the moment: making reservations at a restaurant; scheduling haircuts; and asking businesses for their holiday hours. It will only be available to a limited (and unknown) number of users sometime this summer.

One more big caveat: if the call goes wrong, a human takes over. In its blog post, Google says Duplex has a "self-monitoring capability" that allows it to recognize when the conversation has moved beyond its capabilities. "In these cases, it signals to a human operator, who can complete the task," says Google. This is similar to Facebook's personal assistant M, which promised to use AI to deal with similar customer service scenarios but ended up outsourcing an unknown amount of this work to humans. (Facebook closed this part of the service in January.)
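Google doesn't explain how that self-monitoring works. A common pattern in dialogue systems is a simple confidence threshold, and the hypothetical Python sketch below illustrates that general idea only; it is not a description of Duplex's actual mechanism:

```python
# Hypothetical confidence-based handoff, illustrating the general pattern
# rather than Google's actual self-monitoring. The bot tracks how sure it
# is about each exchange and escalates to a human operator once it has been
# lost too many times.

CONFIDENCE_FLOOR = 0.6          # assumed threshold, purely for illustration
MAX_LOW_CONFIDENCE_TURNS = 2

def handle_call(turns):
    """`turns` is a list of (transcript, confidence) pairs from a speech model."""
    low_turns = 0
    for transcript, confidence in turns:
        if confidence < CONFIDENCE_FLOOR:
            low_turns += 1
            if low_turns > MAX_LOW_CONFIDENCE_TURNS:
                return "escalate to human operator"
        # ...otherwise keep driving the scripted dialogue...
    return "completed by bot"

print(handle_call([("7 pm works", 0.92), ("uh, what?", 0.31),
                   ("(crosstalk)", 0.22), ("(silence)", 0.10)]))
# -> escalate to human operator
```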

All this gives us a clearer picture of what Duplex can do, but it doesn't begin to answer the questions of what effects Duplex will have. And as the first company to demo this tech, Google has a responsibility to face these issues head-on.

The obvious question is, should the company notify people that they're talking to a robot? Google's vice president of engineering, Yossi Matias, told CNET it was "likely" this would happen. Speaking to The Verge, Google said it definitely believes it has a responsibility to inform individuals.

Many experts in this domain agree, although how to notify the human on the call is tricky. If Assistant starts its calls by saying "hello, you're speaking to a robot" then the receiver is likely to just hang up. More subtle indicators could mean limiting the realism of the AI's voice or including a special tone during calls. Google tells The Verge it hopes a set of social norms will organically evolve that make it clear when the caller is an AI.

Joanna Bryson, an associate professor at the University of Bath who studies AI ethics, told The Verge that Google has an obvious obligation to disclose this information. If robots can freely pose as humans, the scope for mischief is incredible, ranging from scam calls to automated hoaxes. (Imagine getting a panicked phone call from someone saying there was a shooting nearby. You ask them a few questions, they answer — enough to convince you it's a real person — then hang up, saying they got the wrong number.)

But Bryson says letting companies manage this themselves won't be enough, and that new laws will need to be introduced to protect the public. "Unless we regulate it, some company in a less conspicuous position than Google will take advantage of this technology," says Bryson. "Google may do the right thing but not everyone is going to."

And if this technology becomes widespread, it will have other, more subtle effects, the type which can't be legislated against. Writing for The Atlantic, Alexis Madrigal suggests that small talk — either during phone calls or conversations on the street — has an intangible social value. He quotes urbanist Jane Jacobs, who says "casual, public contact at a local level" creates a "web of public respect and trust." What do we lose, then, if we give people another option to avoid social interactions, no matter how minor? If these calls disappear altogether, as AI starts placing them and receiving them, do we lose anything important?

One effect might be making us all a little bit ruder. If we can't tell the difference between humans and AI on the phone, will we treat all phone calls more suspiciously? We might start cutting off real people, telling them: "Just shut up and let me speak to a human." And if it becomes easier for us to book reservations at a restaurant, might we take advantage of that fact and book them more often, then care less when we can't show up? (Google told The Verge it would limit the number of daily calls a business could receive from Assistant, and the number of calls Assistant could place, in order to stop people from using the service for spam.)
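Google hasn't published what those limits are. As a hypothetical illustration, a per-business daily cap could be as simple as the following sketch, where the cap of three calls is a made-up placeholder:

```python
# Illustrative only: a per-business daily cap on automated calls. The real
# limits, and how Google would enforce them, have not been disclosed.
from collections import defaultdict
from datetime import date

MAX_ASSISTANT_CALLS_PER_BUSINESS_PER_DAY = 3   # assumed placeholder value

_calls_today = defaultdict(int)

def may_place_call(business_id, today=None):
    """Return True and record the call if the business is still under today's cap."""
    key = (business_id, today or date.today())
    if _calls_today[key] >= MAX_ASSISTANT_CALLS_PER_BUSINESS_PER_DAY:
        return False
    _calls_today[key] += 1
    return True

for attempt in range(5):
    print(attempt + 1, may_place_call("corner-bistro"))   # the 4th and 5th are refused
```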

There are no obvious answers to these questions, but as Bryson points out, Google is at least doing the world a service by bringing attention to this technology. It's not the only company developing these services, and it certainly won't be the only one to use them. "It's a huge deal that they're showcasing it," says Bryson. "It's important that they keep doing demos and videos so people can see this stuff is happening [...] What we really need is an informed citizenry."

In other words, we need to have a conversation about all this before the robots start doing the talking for us.


Recode Daily: Facebook's massive reorg keeps user privacy and security top of mind

Posted: 09 May 2018 06:22 AM PDT

Facebook is making the biggest executive shuffle in its 15-year history. WhatsApp, Messenger and Facebook’s core app are getting new leaders as part of a massive executive reorg. CEO Mark Zuckerberg has reorganized the social giant’s product and engineering organizations into three main divisions, including a new “Family of apps” group run by Chief Product Officer Chris Cox, the executive previously in charge of the core Facebook app. Cox will now oversee Facebook, Instagram, WhatsApp and Messenger, four social apps with a combined reach of more than five billion monthly users. A “New platforms and infra” group will be managed by CTO Mike Schroepfer, and will include a team dedicated to blockchain tech, along with Facebook’s AI, VR and AR efforts. A third division called “Central product services,” headed by Javier Olivan, includes all the shared features that operate across multiple products or apps, such as ads, security and growth. [Kurt Wagner / Recode]


Facebook is also creating a new team focused on building privacy products. And Chris Daniels was named the new VP of Facebook-owned WhatsApp, just a week after its co-founder and CEO Jan Koum announced he was leaving. [Kurt Wagner / Recode]

Amazon employees are outraged by the company’s opposition to a plan to formally consider women and minority candidates when selecting new board members. Tensions surfaced last week on an employee email thread following a proposal by shareholders that Amazon implement a “Rooney Rule,” akin to the rule that requires NFL teams to interview at least one minority candidate for head coaching and general manager openings. All 10 directors of Amazon’s board are white; seven are men and three are women. [Jason Del Rey / Recode]

Comcast is preparing to bid for the media assets that 21st Century Fox already agreed to sell to Disney. But Comcast CEO Brian Roberts, who first bid for Fox assets last fall, won’t make a new move unless a federal judge approves AT&T’s plan to buy Time Warner; a decision in that case is due by June 12. Meanwhile, 21st Century Fox CEO James Murdoch is clarifying his post-merger plans: He wants to become a VC. [Greg Roumeliotis and Liana B. Baker / Reuters]

Workplace messaging company Slack has eight million daily users and three million paid users. CEO Stewart Butterfield said that annual revenue is about $300 million; he said the company’s planned IPO will not happen this year. [Rolfe Winkler / The Wall Street Journal]

Here’s a darkly comic look inside Univision, written by some of its employees, who argue that the company has been in decline for years due to corporate raiding, complacency, excess and incompetence. [Kate Conger, David Uberti and Laura Wagner / Gizmodo]

Recode Presents ...

“Chaos Monkeys” author and former Facebook ad targeting manager Antonio García-Martínez will join Kara Swisher on this week’s Too Embarrassed to Ask podcast. Got questions for Antonio about Facebook or anything else? Send them in to TooEmbarrassed@recode.net or tweet them with #TooEmbarrassed!

Top stories from Recode

Facebook added Jeff Zients, the former director of the National Economic Council, to its board of directors.

Zients, who is the CEO of Cranemere Group Limited, will officially join the board at the end of the month.

Match Group says Facebook’s new dating feature will have “no negative impact on Tinder.”

“[People don’t] want to mix Facebook with their dating lives,” says Match CEO Mandy Ginsberg.

The FAA will have “zero tolerance” for anything less safe than current standards when it comes to regulating flying cars.

And there were zero fatalities in commercial airline crashes in 2017.

Glassdoor, the iconic job-hunting website, has been bought for $1.2 billion.

The buyer: Recruit Holdings, a Japanese human resources company that owns multiple job sites.

If she were Mark Zuckerberg, Patagonia CEO Rose Marcario says she might have temporarily shut down Facebook.

On the latest episode of Recode Decode, Marcario says, “After this huge thing happens — our country gets attacked — I think the customers would have been like, ‘Okay! That makes me feel like you’ve got it!’”

This is cool

The happiness curve: Why life gets better after 50.


Two more reasons I'll never switch from the iPhone to Android

Posted: 09 May 2018 07:06 AM PDT

Earlier this week, I wrote a piece about why I won’t switch from the iPhone to Android anytime soon. In fact, the odds are fairly good that I’ll never switch from iOS to Google’s mobile platform, even far down the road. When I discuss this topic on the site, I typically point to a few main reasons for my decision. As an iPhone user since the very first model was released in 2007, I’m definitely locked into Apple’s ecosystem. But the fact of the matter is that I enjoy Apple’s ecosystem and its products. The overall iOS user experience is vastly smoother and more refined than Android’s, even considering the current messy state of iOS 11. Apple’s mobile and desktop platforms also feature much more streamlined integration than Google’s Android and Chrome OS platforms. Google is still where I turn for most of my services, and I still dream of a day when Google has completely taken over my iPhone. But Apple’s hardware is just too good, and its platforms and user experiences are just too clean to abandon.

In the piece I wrote this week, I covered another reason I stay away from Android and stick with the iPhone. Android fans often complain about Apple’s “walled garden” that restricts third-party apps from accessing many core OS functions and therefore offering many features seen as crucial to the Android experience. Of course, those same policies prevent things like this scary new mega, monster, mutant Android malware from being possible on iOS devices. Now, on Wednesday morning, two new stories popped up that cover even more reasons I stick with the iPhone and won’t ever jump ship to Android.

Google I/O 2018 kicked off on Tuesday and Google held its big keynote yesterday at 10:00 AM. The company covered a ton of interesting new features coming to Android, including a few things that are nothing short of mind-blowing. Of course, the beauty of Google is that it’s a software company first, so most of the biggest things covered during Google I/O 2018 will be made available on iOS devices as well, like whatever end user services are born of Google Duplex, which was easily the most impressive thing Google showed off on Tuesday.

While the web rightfully continues to buzz about Google and its announcements from the event, two things are happening over on the Apple side of the fence that serve as fresh reminders of yet another way Apple’s platforms offer key benefits over Android.

First comes the news that Apple is cracking down on apps that share location data with third parties. Policies preventing these practices have been in place in Apple’s iOS developer guidelines for quite some time, but the Facebook-Cambridge Analytica scandal and Europe’s new General Data Protection Regulation (GDPR) have renewed public interest in the topic, and now Apple is taking important steps to protect users.

Apple is clearly in the midst of an audit of third-party apps to ensure that this policy is adhered to, and in the end it’ll be iPhone and iPad users who benefit.

The second news item that helps illustrate Apple’s customer-first stance is something that was found hiding in the code in Apple’s latest developer beta, iOS 11.4 beta 4. It’s called “USB Restricted Mode” and it first appeared in an iOS 11.3 beta, but was removed before the software was released to the public. Apple likely had some wrinkles to iron out before it was ready for primetime, and now it appears as though USB Restricted Mode will debut in the release version of iOS 11.4.

What is USB Restricted Mode? Here’s an excerpt from a post on the Elcomsoft blog:

The functionality of USB Restricted Mode is actually very simple. Once the iPhone or iPad is updated to the latest version of iOS supporting the feature, the device will disable the USB data connection over the Lightning port one week after the device has been last unlocked.

At this point, it is still unclear whether the USB port is blocked if the device has not been unlocked with a passcode for 7 consecutive days; if the device has not been unlocked at all (password or biometrics); or if the device has not been unlocked or connected to a trusted USB device or computer.

If you’re still unclear, this is a shot fired directly at the NSA, FBI, and other intelligence and law enforcement agencies. That’s right, Apple is protecting user data not just from hackers and malware, but also from intelligence agencies that regularly break into devices in order to obtain and analyze private data.

Devices like the notorious GrayKey encryption-breaking box are used by law enforcement to crack PINs and passcodes using brute-force attacks. With Apple’s new USB Restricted Mode enabled, however, these devices would no longer be able to connect to iPhones and iPads that have been locked for at least seven days.
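Going by Elcomsoft's description, the behavior amounts to a timer check against the device's last unlock. The Python sketch below models only that described behavior, with the week-long window taken from the excerpt above; it implies nothing about Apple's actual implementation:

```python
# Rough model of the behavior Elcomsoft describes, not Apple's code:
# USB data over the Lightning port is refused once a week has passed
# since the device was last unlocked; charging still works.
from datetime import datetime, timedelta

USB_LOCKOUT_WINDOW = timedelta(days=7)

def usb_data_allowed(last_unlock, now=None):
    """True if a data connection over USB should still be permitted."""
    now = now or datetime.now()
    return (now - last_unlock) < USB_LOCKOUT_WINDOW

# A device last unlocked eight days ago presents a charge-only connection,
# so a brute-force box plugged in at that point gets no data channel.
print(usb_data_allowed(datetime.now() - timedelta(days=8)))   # False
```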

Protections like this are as much a marketing ploy as they are a service to users. Apple’s continued message of “we’re not Google and Facebook, we care about your privacy and we don’t sell your data” has been and will continue to be instrumental. But the end result is the same as it would be if Apple were truly pure of heart, and I know my personal data is protected on my iPhone far more securely than it would be on Android.

