On July 5, the U.S. Ninth Circuit Court of Appeals issued an opinion that found, in part, that sharing passwords can be grounds for prosecution under the Computer Fraud and Abuse Act (CFAA). The decision, according to a dissenting opinion on the case, risks making millions of people who share passwords into “unwitting federal criminals.”
The decision came in the case of David Nosal, an employee at the executive search (or headhunter) firm Korn/Ferry International. Nosal left the firm in 2004 after being denied a promotion. Though he stayed on for a year as a contractor, he was simultaneously preparing to launch a competing search firm, along with several co-conspirators. Though all of their computer access was revoked, they continued to access a Korn/Ferry candidate database, known as Searcher, using the login credentials of Nosal’s former assistant, who was still with the firm.
Nosal was eventually charged with conspiracy, theft of trade secrets, and three computer fraud counts, and was sentenced to prison time, probation, and nearly $900,000 in restitution and fines.
Nosal’s conviction under CFAA hinged on a clause that criminalizes anyone who “knowingly and with intent to defraud, accesses a protected computer without authorization.” Though CFAA is often understood to be an anti-hacking law, that clause in particular has been applied to many cases that fall far short of actual systems tampering.
CFAA has, for instance, been used to prosecute violations of Terms of Service agreements (which are themselves a contested practice). Most notoriously, the law was used to pursue Aaron Swartz, the young programmer who committed suicide after being charged with mass-downloading research papers from an MIT database, in violation of its terms of service—despite the fact that he was then a research fellow at MIT, with authorized access to the involved database.
Read more here: fortune.com/2016/07/10/sharing-netflix-password-crime/
Insurify, a startup out of MIT, today announced the launch of Evia (Expert Virtual Insurance Agent), an artificially intelligent virtual insurance agent that aims to find you better car insurance using a photo of your license plate.
The company recently pulled in $2 million in seed funding from Rationalwave Capital Partners to create Evia and launch its robo-agent platform.
“We barely have time to talk to our friends on the phone, never mind insurance agents. Yet buying insurance is still as bad as buying an airline ticket was 15 years ago, often requiring 40 minutes on the phone with an agent,” Insurify CEO Snejina Zacharia told TechCrunch about why she started the business.
It’s a similar idea to Coverhound, which compares car and property insurance using data from its own site as well as Google’s insurance research to bring you a quote it thinks you’ll like.
However, Insurify simplifies the way it gets you that quote: you snap a photo of your license plate and text it to Evia. The robo-agent then scours millions of records to verify personal information and driving history, and delivers policy quotes and recommendations back to you via text message.
Bigger insurance carriers like Progressive, Allstate and AAA are among Insurify’s offerings. The company also sees startups like Coverhound as a potential partner.
Is it really that novel to simply snap a pic and get a quote via text? This seems to be the “killer” feature that puts Insurify above the rest, and it does make the process easier. Some, however, will have concerns about privacy and security with this method.
What if someone takes a snap of a vehicle that doesn’t belong to them? Insurify told us the plate number sent to Evia does not contain any private information. If someone takes a picture of a vehicle they do not own, Insurify will give them a quote for that specific vehicle, but won’t tell them who owns it.
Read more here: techcrunch.com/2016/01/28/mit-spinout-insurify-raises-2-million-to-replace-human-insurance-agents-with-a-robot/
Researchers working in the Netherlands have developed an atomic-scale rewritable data-storage device capable of packing 500 terabits onto a single square inch. Incredibly, that’s enough to store every book written by humans on a surface the size of a postage stamp. Holy shit.
This atomic hard drive, developed by Sander Otte and his colleagues at Delft University, features a storage density that’s 500 times larger than state-of-the-art hard disk drives. At 500 terabits per square inch, it has the potential to store the entire contents of the US Library of Congress in a 0.1-mm wide cube. The new system, described in the latest issue of Nature Nanotechnology, still requires considerable work before it’s ready for prime time, but it’s an important proof-of-principle that lays the groundwork for the development of useable atomic-scale data storage devices.
This isn’t the first time scientists have positioned individual atoms at will. Researchers have been moving atoms using scanning tunneling microscopes since the early 1990s, but current methods are tedious and slow, requiring tremendous patience and persistence. The new system, while still a bit slow, is a huge improvement in user friendliness.
To make it work, Otte and team placed chlorine atoms on a copper surface, resulting in a perfect square grid. Importantly, a hole appears on this grid wherever an atom is missing. As we all know, this kind of on/off configuration lends itself well to binary switching—the foundation of digital data storage. Using the sharp needle of a scanning tunneling microscope, the researchers were able to probe the atoms one by one, and even drag individual atoms toward a hole.
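As a rough illustration of that on/off idea (a toy model only, not the Delft team's actual encoding scheme), a grid of atom-or-hole sites reads out directly as binary:

```python
# Toy model of atom/vacancy storage: a chlorine atom at a site is a 1,
# a hole (missing atom) is a 0. Illustrative only.

def read_bits(grid):
    """Read a row-major list of bits from a 2D grid of sites."""
    return [1 if site == "Cl" else 0 for row in grid for site in row]

def bits_to_bytes(bits):
    """Pack bits (most significant bit first) into bytes."""
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )

# A 2x8 grid spelling one byte per row: 'H' (01001000) and 'i' (01101001).
grid = [
    ["hole", "Cl", "hole", "hole", "Cl", "hole", "hole", "hole"],
    ["hole", "Cl", "Cl", "hole", "Cl", "hole", "hole", "Cl"],
]
print(bits_to_bytes(read_bits(grid)))  # b'Hi'
```

Scaling this toy grid up to the real device's 8,000 bits is what required positioning (or omitting) thousands of individual atoms.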
“The combination of chlorine atoms and supporting copper crystal surface that we found now, combined with the fact that we manipulate ‘holes’—just as in a sliding puzzle—makes for a much more reliable, reproducible and scalable manipulation technique that can easily be automated,” explained Otte to Gizmodo. “It is as if we have invented the atomic scale printing press.”
Read more here: gizmodo.com/record-setting-hard-drive-writes-information-one-atom-a-1783740015
Last month, law enforcement officers showed up at the lab of Anil Jain, a professor at Michigan State University. Jain wasn’t in trouble; the officers wanted his help.
Jain is a computer science professor who works on biometric identifiers such as facial recognition programs, fingerprint scanners and tattoo matching; he wants to make them as difficult to hack as possible. But the police were interested in the opposite of this: they wanted his help to unlock a dead man’s phone.
Jain and his PhD student Sunpreet Arora couldn’t share details of the case with me, since it’s an ongoing investigation, but the gist is this: a man was murdered, and the police think there might be clues to who murdered him stored in his phone. But they can’t get access to the phone without his fingerprint or passcode. So instead of asking the company that made the phone to grant them access, they’re going another route: having the Jain lab create a 3D printed replica of the victim’s fingers. With them, they hope to unlock the phone.
Arora described how this works to me. The police already have a scan of the victim’s fingerprints taken while he was alive (apparently he had been arrested previously). They gave those scans to the lab, and using them Arora has created 3D printed replicas of all ten digits.
“We don’t know which finger the suspect used,” he told me by phone. “We think it’s going to be the thumb or index finger—that’s what most people use—but we have all ten.”
A 3D printed finger alone often can’t unlock a phone these days. Most fingerprint readers used on phones are capacitive, which means they rely on the closing of tiny electrical circuits to work. The ridges of your fingers cause some of these circuits to come in contact with each other, generating an image of the fingerprint. Skin is conductive enough to close these circuits, but the normal 3D printing plastic isn’t, so Arora coated the 3D printed fingers in a thin layer of metallic particles so that the fingerprint scanner can read them.
It’s not a foolproof method yet. Arora is still refining the technology, and they haven’t yet given the fingers back to the police to try and unlock the victim’s phone. But Arora said that in a few weeks, once he’s tested the fingers enough in the lab, he’ll hand them over. Then the police will try to use 3D printed models of a dead man’s fingers to unlock his phone.
The security and privacy of phones have been heated topics in the news lately. You probably remember that Apple and the FBI went back and forth in court over gaining access to the iPhone of the deceased San Bernardino shooter, which was locked with a passcode. This case is a bit different because the cops don’t need a phone company’s help. And the fact that the owner of the phone is dead eliminates some of the legal issues that would usually arise, said Bryan Choi, a researcher who focuses on issues of security, law and technology.
“The Fifth Amendment protects against self-incrimination. Here, the fingerprints are of the deceased victim, not the murder suspect. Obviously, the victim is not at risk of incrimination,” Choi said by email. And even if law enforcement found evidence of other crimes on the phone, the victim is dead, so it’s not like they’d be bringing him to trial anyway.
Where it gets murkier, and more interesting, is whether this kind of technology can and should be used in other cases, involving living suspects. If this works, then to get into a phone locked by a thumbprint, cops would just need the owner’s fingerprints… and a court order: in 2014, the Supreme Court ruled in Riley v. California that police need a warrant to search the contents of a personal cell phone.
Read more here: fusion.net/story/327145/3d-print-dead-mans-fingers-to-unlock-his-phone/
Unique online ‘fingerprints’ left behind as you use your device in public can reveal your identity and be used to track your movements, say scientists who are working on ways to protect personal computers against fingerprinting.
People leave behind a browser “fingerprint” at each website they visit, the researchers said.
Almost like a regular fingerprint, a person’s browser fingerprint – or “browserprint” – is often unique to the individual. Such a fingerprint can be monitored, tracked and identified by companies and hackers.
Researchers at the University of Adelaide in Australia are working to find new methods of protecting against the fingerprinting of personal computers – and are now giving members of the community the chance to see their own computer browserprint.
“Fingerprinting on computers is invisible to most people but there are companies out there who are already using these techniques to learn more information about individuals, about their interests and their habits,” said Lachlan Kang, PhD student at the University of Adelaide.
“This can be quite powerful information to have, especially if it’s used to tailor advertising to you,” Kang said.
In countries that are less benign, it could also be used to spy on people, he said.
“Computer users generally are growing in awareness of privacy issues, but currently there’s little that can be done to counter fingerprinting,” Kang said.
“This is because fingerprints build up in between the websites you’re visiting – your browsing history and personal information can be pooled in the gaps between those websites,” he said.
“Simply clearing your browsing history won’t make any difference to this, because the information is already out there,” he added.
Kang is seeking the public’s help to better understand which fingerprinting techniques are the most powerful, so that he can help to build defences against them.
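The pooling Kang describes works because the same set of browser-observable attributes recurs at every site a person visits. A minimal sketch of the idea (the attribute names below are common illustrative examples, not the Adelaide study's actual feature set):

```python
import hashlib
import json

def browserprint(attributes):
    """Hash a dict of browser-observable attributes into a short stable ID."""
    canonical = json.dumps(attributes, sort_keys=True)  # order-independent
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Attributes a site's scripts can typically read without asking permission.
visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/47.0",
    "screen": "1920x1080x24",
    "timezone": "Australia/Adelaide",
    "language": "en-AU",
    "fonts": ["Arial", "DejaVu Sans", "Liberation Serif"],
}
# Every site that computes this sees the same ID, so visits can be linked
# across sites even after the user clears cookies and history.
print(browserprint(visitor))
```

This also shows why clearing your history doesn't help: the ID is recomputed from the browser's current configuration on every visit, not stored on your machine.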
Read more here: indianexpress.com/article/technology/science/computer-fingerprints-may-give-out-your-identity-location-2923593/
Scientists in the Netherlands have succeeded in writing data at the smallest scale ever, manipulating chlorine atoms one at a time to store a kilobyte of data in what's being called the world's 'smallest hard disk'.
Taking their inspiration from famous physicist Richard Feynman — who back in 1959 envisioned that one day individual atoms could be arranged to store information — the researchers actually coded a section of Feynman's speech on the topic into their atomic kilobyte.
According to the team from the Kavli Institute of Nanoscience at Delft University, writing data at this incredibly small scale — 1 kilobyte (8,000 bits) recorded in an area just 96 nanometers (nm) wide and 126 nm tall — enables a storage density of 500 terabits per square inch (Tbpsi), which is 500 times better than the capabilities of the best hard drives we use today.
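Those numbers are easy to sanity-check. A naive conversion (which ignores the exact per-bit cell geometry, and so lands somewhat under the quoted figure) goes like this:

```python
# Back-of-envelope check of the quoted storage density.
bits = 8_000                     # 1 kilobyte
width_nm, height_nm = 96, 126    # dimensions of the written area
NM_PER_INCH = 2.54e7             # 1 inch = 25.4 mm = 2.54e7 nm

area_in2 = (width_nm / NM_PER_INCH) * (height_nm / NM_PER_INCH)
density_tbpsi = bits / area_in2 / 1e12   # terabits per square inch
print(f"{density_tbpsi:.0f} Tbpsi")      # ~427: same order as the quoted 500
```

The rough figure of about 427 Tbpsi is the same order of magnitude as the team's 500 Tbpsi, which is computed from the precise per-bit cell size rather than the overall patch dimensions.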
"In theory, this storage density would allow all books ever created by humans to be written on a single post stamp," says lead researcher Sander Otte.
To create their record-setting data mechanism, the researchers used a scanning tunneling microscope (STM), a tool that enables scientists to image and manipulate material at the atomic level. With the probe, they were able to physically arrange chlorine atoms on a copper plate, moving them one at a time to make up blocks of memory consisting of 64 bits, encoded in binary patterns that work much like miniature QR codes.
"You could compare it to a sliding puzzle," says Otte. "Every bit consists of two positions on a surface of copper atoms, and one chlorine atom that we can slide back and forth between these two positions. If the chlorine atom is in the top position, there is a hole beneath it — we call this a 1. If the hole is in the top position and the chlorine atom is therefore on the bottom, then the bit is a 0."
Because the technique enables data to be written at such an incredibly reduced scale compared to today's storage devices, it could hypothetically offer a massive boost in storage efficiency — shrinking the massive data centers that house our information in the cloud, and enabling consumer gadgets to become even more miniaturized.
But due to the coldness requirements for the memory to function, it may still be a while yet before your Spotify or Netflix streams to you courtesy of chlorine.
"In its current form the memory can operate only in very clean vacuum conditions and at liquid nitrogen temperature (77 K, which is –196 degrees Celsius or –321 degrees Fahrenheit)," Otte explains, "so the actual storage of data on an atomic scale is still some way off. But through this achievement we have certainly come a big step closer."
Read more here:
A study by Prof. Kevin Warwick and Dr. Huma Shah of Coventry University, UK, has exposed a serious problem in the Turing test, the test posited by the famed British mathematician and computer scientist Alan Turing which, if passed, would prove that machines could think.
The Turing test assesses a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human.
“In his 1950 paper, Alan Turing wished to consider the question, ‘Can machines think?’ Rather than get bogged down by definitions of both of the words ‘machine’ and ‘think’ he replaced the question with one based on a much more practical scenario, namely his imitation game,” the authors said.
“Turing himself described the game in these terms: ‘The idea of the test is that a machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing.’”
In its standard form, the Turing test is described as an experiment that can be done in two different ways: (i) one interrogator and one hidden interlocutor; (ii) one interrogator and two hidden interlocutors.
In both cases, a machine must provide ‘satisfactory’ and ‘sustained’ answers to any questions put to it by the human interrogator.
“However, what about the theoretical case when the machine takes the Fifth Amendment: ‘no person shall be held to answer’?”
“If a machine were to ‘take the fifth amendment’ – that is, exercise the right to remain silent throughout the test – it could, potentially, pass the test and thus be regarded as a thinking entity,” the scientists said.
The study looks at transcripts of a number of conversations from actual Turing tests in which the hidden machine remained silent.
In each case, the human judge was unable to say for certain whether they were interacting with a person or a machine.
Thus, a machine could potentially pass the Turing test simply by remaining silent.
The judge would be unable to determine whether the silent entity was a human choosing not to answer the questions, a smart machine that had decided not to reply, or a machine experiencing technical problems that prevented it from answering (as was actually the case in the transcripts studied).
“This begs the question, what exactly does it mean to pass the Turing test? Turing introduced his imitation game as a replacement for the question ‘Can machines think?’ and the end conclusion of this is that if an entity passes the test then we have to regard it as a thinking entity,” Prof. Warwick said.
“If an entity can pass the test by remaining silent, this cannot be seen as an indication it is a thinking entity, otherwise objects such as stones or rocks, which clearly do not think, could pass the test.”
Read more here: www.sci-news.com/othersciences/computerscience/major-flaw-turing-test-04000.html
Police and car insurers say thieves are using laptop computers to hack into late-model cars’ electronic ignitions to steal the vehicles, raising alarms about the auto industry’s greater use of computer controls.
The discovery follows a recent incident in Houston in which a pair of car thieves were caught on camera using a laptop to start a 2010 Jeep Wrangler and steal it from the owner’s driveway. Police say the same method may have been used in the theft of four other late-model Wranglers and Cherokees in the city. None of the vehicles has been recovered.
“If you are going to hot-wire a car, you don’t bring along a laptop,” said Senior Officer James Woods, who has spent 23 years in the Houston Police Department’s auto antitheft unit. “We don’t know what he is exactly doing with the laptop, but my guess is he is tapping into the car’s computer and marrying it with a key he may already have with him so he can start the car.”
The National Insurance Crime Bureau, an insurance-industry group that tracks car thefts across the U.S., said it recently has begun to see police reports that tie thefts of newer-model cars to what it calls “mystery” electronic devices.
“We think it is becoming the new way of stealing cars,” said NICB Vice President Roger Morris. “The public, law enforcement and the manufacturers need to be aware.”
Read more here: www.wsj.com/articles/thieves-go-high-tech-to-steal-cars-1467744606
People have always dreamed about going beyond the limitations of their bodies: the pain, illness and, above all, death.
Now a new movement is dressing up this ancient drive in new technological clothes.
Referred to as transhumanism, it is the belief that science will provide a futuristic way for humans to evolve beyond their current physical forms and realise these dreams of transcendence.
Perhaps the most dramatic way transhumanists believe that technology will transform the human condition is the idea that someone's mind could be converted into digital data and 'uploaded' into an immensely powerful computer.
This would allow you to live in a world of unbounded virtual experiences and effectively achieve immortality (as long as someone remembers to do the backups and doesn't switch you off).
Yet transhumanists seem to ignore the fact that this kind of mind-uploading has some insurmountable obstacles.
The practical difficulties mean it couldn't happen in the foreseeable future, but there are also some more fundamental problems with the whole concept.
The idea of brain uploading is a staple of science fiction.
The author and director of engineering at Google, Ray Kurzweil, has perhaps done the most to popularise the idea that it might become reality – perhaps as soon as 2045.
Recently, the economist Robin Hanson has explored in detail the consequences of such a scenario for society and the economy.
He imagines a world in which all work is carried out by disembodied emulations of human minds, running in simulations of virtual reality using city-size cloud computing facilities.
It's a short step from the idea that our minds could be uploaded, to the notion that they already have been and that we are already living in a Matrix-style computer simulation.
Read more: http://www.dailymail.co.uk/sciencetech/article-3675561/Would-upload-brain-computer-Experts-reveal-live-forever-digitally.html
The American Civil Liberties Union filed a lawsuit against the U.S. Attorney General on June 29, arguing that a section of the Computer Fraud and Abuse Act unconstitutionally criminalizes research aimed at determining whether online algorithms result in discrimination against certain races, genders, and other minority groups.
“The work of our clients has a clear social benefit and is protected by the First Amendment,” said Esha Bhandari, staff attorney with the ACLU Speech, Privacy, and Technology Project and an attorney on the case. “This law perversely grants businesses that operate online the power to shut down investigations of their practices.”
The law criminalizes the violation of a website’s terms of service, which often prohibit the creation of multiple or dummy accounts, as well as the automated collection of publicly available data such as ads and search results. Researchers, however, use these fake accounts to determine whether websites’ algorithms are serving up content based on discriminatory demographics.
“Companies employ sophisticated computer algorithms to analyze the massive amounts of data they have about Internet users. This use of ‘big data’ enables websites to steer individuals toward different homes or credit offers or jobs—and they may do so based on users’ membership in a group protected by civil rights laws,” Bhandari and Rachel Goodman, a staff attorney for the ACLU Racial Justice Program and attorney on the case, explain in a blog post about the lawsuit.
The ACLU argues that physically conducted investigations of a comparable nature, such as testing whether similar individuals of different races are offered different housing options, are encouraged by Congress and the courts. This law, however, restricts such investigations when they are conducted online.
The Federal Trade Commission (FTC) has already acknowledged that there is discrimination potential in the use of big data, hosting a public workshop in September 2014 to address just such concerns.
“Workshop participants and others have noted how potential inaccuracies and biases might lead to detrimental effects for low-income and underserved populations,” a January 2016 FTC report said. “For example, participants raised concerns that companies could use big data to exclude low-income and underserved communities from credit and employment opportunities.”
Read more here: www.meritalk.com/articles/aclu-files-suit-over-the-computer-fraud-and-abuse-act/