
Saturday, August 18, 2012

Germany declares Facebook facial recognition illegal

Originally posted 08/04/2011 on lubbockonline.com

It looks like Facebook is learning the lesson Walmart learned when it comes to doing business in Germany: Germany is not the U.S. Matthew Shaer reports in the Christian Science Monitor that Facebook's facial recognition 'feature' has been declared illegal in Germany.

I don't know how much effect this will actually have on Facebook. It will depend on what kind of action Germany decides to take and on Facebook's response. Honestly, even if Germany successfully blocked Facebook, would Facebook care? The German government might feel the pressure more than Facebook. There will probably be some type of compromise, but I honestly don't see Facebook giving up its facial recognition software completely.

Online safety just takes a little common sense

Originally posted 08/02/2011 on lubbockonline.com

Google has backed off the 'real name' policy on Google+. That's good, but it takes both more and less than pseudonyms to be safe online. The internet is a wild and woolly place. If you're not careful you can reveal a lot more of yourself than you intend.

So what kinds of things do you need to do to protect yourself? It's not very hard. There are three simple things you can do that will help you stay safe:

    • You can't take it back. Once you put something online it's out there. You can delete every copy you can find, but you'll never know who read it or how many copies were made of it.
    • The internet is not the place to put your darkest secrets. If there is ANYONE you would not want to see what you are about to post, don't post it.
    • Be careful what you put online. Without even trying, we are far more exposed after a few days (maybe even hours) of online activity than we realize. When you talk about your pets, your childhood, or your hobbies, you are giving people information that can be used to attack you.

It's very easy to give up too much information online, but you can protect yourself without having to sever all ties to the online world. Just using a little common sense makes your online experience more fun and a lot safer.

Randi Zuckerberg: Anonymity on the Internet has to go away

Originally posted 07/29/2011 on lubbockonline.com

Bianca Bosker at the Huffington Post tells us that Randi Zuckerberg, Facebook marketing director and Mark Zuckerberg's sister, believes anonymity on the internet has to go away. Bianca quotes Randi as saying:

“I think anonymity on the Internet has to go away,” she said during a panel discussion on social media hosted Tuesday evening by Marie Claire magazine. “People behave a lot better when they have their real names down. … I think people hide behind anonymity and they feel like they can say whatever they want behind closed doors.”

Miss Zuckerberg also alleges that requiring people to use their real names will end cyberbullying. Apparently she was never bullied on the playground growing up. For that matter, she must not have paid attention to things going on on Facebook the last few years.

The Toronto Sun reports that a 16-year-old girl will be sentenced on August 15th for stabbing another 16-year-old girl after making threats on Facebook. After stabbing the other girl, she went home and threatened to do worse if the other girl messed with her again.

A little over a month ago I told you about Jason Valdez, who holed up in a motel room with a (maybe) hostage and talked about his police standoff on Facebook while friends and family informed him of police movements. Jason's account was in his real name, and I'm pretty sure most of the others were, too.

And for good measure, we have London Eley, whom I told you about a week before Mr. Valdez. Miss Eley tried to hire a hitman on Facebook. Her Facebook account is down now, but I visited it when I first read the story. It was easy to find because she used her real name.

While using real names can make it easier to find people who are doing wrong, using real names does little to prevent bad behavior. Requiring real names on Facebook has not stopped bad behavior. It would be hard to say real names have even slowed bad behavior down. Why would anyone expect "real name only" policies to work any better on the internet at large than they do on Facebook?

Google deleting/suspending users for using handles

Originally posted 07/26/2011 on lubbockonline.com

Google+ (g+) has gotten off to a good start. Unfortunately it's hit a rocky spot that could, if not handled properly, destroy the momentum it's enjoyed so far. Rafal Los (aka Wh1t3Rabbit) reports that the hacker community is in an uproar. Apparently Google has been disabling the accounts of people who are using handles instead of their real names, and some people have lost everything they had in Google. This is a problem because Google's terms of service apparently don't say you have to use your real name, but rather the name your friends know you by.

I'm not a hacker, but, like Rafal Los, I am acquainted with people who do not use their real names. I've never known their real names, and if they contacted me using their real names I might ignore it because I wouldn't know who it was.

The other side of the problem is that Google is biting the hands of the early adopters. As I said a few days ago, the majority of Google+ early adopters are male and, as far as I can tell, in the technology sector. A large number of those use handles, and they don't appreciate being told they have to use their real names. People who have been using Gmail since its inception also don't appreciate losing all their archived emails and other data. How many people will turn away from g+ rather than wrestle with Google over usernames?

But it may not be as simple as that. According to Peter Smith at ITWorld there appear to be two types of account issues: accounts being suspended for naming violations, and accounts being suspended for violations of the terms of service. Naming convention violations are relatively minor. You can still get your Gmail, and you just have to prove that people normally know you by your username, either through links or IDs. But the TOS violations are worse: you are not given a clear description of the violation, and you lose access to your entire Google account.

Another article on itworld.com, this one by Juan Carlos Perez, talks about the different groups being affected by the account deletions, which include people who have unusual names and people who don't want to use their real names for privacy reasons. As quoted in the article, Google's response seems to be:

Asked for comment, a Google spokeswoman said via e-mail that Google Profiles are designed to be public Web pages whose purpose is to "help connect and find real people in the real world." "By providing your common name, you will be assisting all people you know -- friends, family members, classmates, co-workers, and other acquaintances -- in finding and creating a connection with the right person online," she wrote.

Google claims to want to make it easier to connect to people we know. Ironically, they are undermining one of the goals they had when they put Google+ into private beta: for us to connect to each other the way we do in real life. In real life online (and sometimes off), people use handles, nicknames, pseudonyms, whatever you want to call them. If Google really wants people to connect on Google+ the way they do in real life, and isn't actually concerned with things like being able to accurately identify people, then insisting on real names is disingenuous. In fact, if someone is using a handle and has followers, especially dozens or more, then their username and profile are already helping to "connect and find real people in the real world." There is no problem, and Google has no reason to suspend accounts because the creator used a handle.

Facebook Friday: Teacher trouble

Originally posted 07/08/2011 on lubbockonline.com

People never learn. Facebook is not a private place. You have more privacy in the local pub than on Facebook, assuming no one posts on Facebook that they saw you there. Or tweets it. But people still insist on treating it as a private forum. Winnie Hu of the NY Times tells us that a teacher in New Jersey is on (paid) administrative leave after complaints that she posted on Facebook that she felt like a warden overseeing future convicts.

I wish I could say this was the first time, or at least unusual. But for some reason teachers seem to be particularly susceptible to the keyboard equivalent of loose lips. From posting questionable pictures to detailing their religious conflicts with their students, teachers are the epitome of being too open on Facebook.

This is a situation that will only get worse unless something changes. Privacy and the control of individuals' personal and identifying information will continue to move from the individual to third parties who may have no interest in protecting the individual or his data. That is something we should all be up in arms over.

Thursday, August 9, 2012

Is the FBI an agency out of control?

Originally posted 06/30/2011 on lubbockonline.com

Kevin Gosztola at Alternet.org looked at 5 types of FBI abuse of power. That abuse of power was, and is, assisted by the FISA court. The FISA court is supposed to oversee FBI investigations, but unless oversight means rubber-stamping electronic surveillance requests (1,506 requests in 2010, 1,506 approved), it's falling down on the job.

The court also granted "National Security Letters" on 14,000 people. National security letters pretty much give the FBI full access to your life:

They were also generous with granting “national security letters," which allow the FBI to force credit card companies, financial institutions, and internet service providers to give confidential records about customers’ subscriber information, phone number, email addresses and the websites they’ve visited. The FBI got permission to spy on 14,000 people in this way. Do they really think there are 14,000 terrorists living in the US?

With that backdrop, Kevin tells us that the FBI is seeking greater investigative power, and tells us of 5 types of investigations that show the last thing the FBI needs is more power:

  1. Warrantless GPS tracking (I blogged about this last year)
  2. FBI Targeting WikiLeaks and Bradley Manning Supporters. The FBI intimidated people involved with the "Bradley Manning Support Network," a legal grassroots organization, for one.
  3. FBI Spied on Children While Using 'Roving Wiretaps,' Intentionally Misled Courts on Freedom of Information Act Requests. Comparing documents from different FOIA requests discovered the deception.
  4. FBI Entrapment of Muslims.
  5. The Criminalization of Travel by the FBI. Vocal activists (not terrorists) are targeted because of disagreement with policy and travel abroad.

I think you should go read the whole article. It's 6 pages, but they're short, and the details he provides are compelling. The last point strikes me a little harder than the others because if I travelled internationally, I could be one of the people targeted. As it is I'm just a harmless crank who blogs in Lubbock, TX and occasionally emails congressmen and the President on issues I feel strongly about. But how long before that isn't enough to protect me from harassment?

Google tries privacy friendlier attack on Facebook

Originally posted 06/29/2011 on lubbockonline.com

Yesterday on the Official Google Blog a new social networking experience was announced. Dubbed Google+, it's similar to Facebook and Myspace in some ways, but if it works as advertised, it will give users more control over their privacy. You will be able to segment your friends the way you do in real life, in 'circles' that won't have any connections you don't want them to have. At this point Google+ is invite-only, so it's too early to tell, but it looks like it has the potential to be a winner.

Here are links to more in-depth stories by people who have already been invited into Google+:

The Epicenter Blog at Wired.com

News & Opinion at PCMag.com

The New York Times Inside Technology

Wednesday, August 8, 2012

Should Google release a website vulnerability scanner?

Originally posted 06/23/2011 on lubbockonline.com

Google announced its experimental Chrome extension, DOM Snitch, to expose vulnerabilities on websites. It's intended for site developers to test their sites from the browser end. As Radoslav Vasilev notes in the announcement, most site testing tools focus on the server side. DOM Snitch lets you see what's going on from the browser side.
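To give a sense of what "the browser side" means here, below is a minimal sketch (not taken from DOM Snitch itself) of the kind of client-side flaw such a tool is designed to flag: untrusted data from the URL flowing straight into the page. The element id and query parameter are made up for illustration.

```typescript
// A sketch of a DOM-based flaw: untrusted URL data written into the page as markup.
// The element id "greeting" and the query parameter "name" are hypothetical.
function renderGreeting(): void {
  const params = new URLSearchParams(window.location.search);
  const name = params.get("name") ?? "friend";

  // Unsafe: attacker-controlled input becomes markup, so a visit to
  // ?name=<img src=x onerror=alert(1)> would run script in the visitor's browser.
  document.getElementById("greeting")!.innerHTML = `Hello, ${name}`;

  // Safer alternative: treat the input as plain text.
  // document.getElementById("greeting")!.textContent = `Hello, ${name}`;
}

renderGreeting();
```

A browser-end scanner watches for exactly this sort of flow, which server-side testing tools tend to miss.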

I like this. I think it'll be pretty cool to go to websites and find out what kinds of vulnerabilities they have. Then again, I might find out things about my bank that would scare the bejesus out of me.

Tuesday, August 7, 2012

Should teachers know students have criminal backgrounds?

Originally posted 06/16/2011 on lubbockonline.com

Megan Ryan of the Houston Chronicle reports that a bill requiring teachers to be informed when a student has a criminal history is sitting on Gov. Perry's desk waiting to be signed. The goal is greater safety for teachers and for other students.

I'm torn on this one. Juveniles are generally protected from exposure because there are a lot of crazy, dangerous and even violent things done by minors who straighten up and become model citizens. When you know someone has a history, there is a tendency to treat them differently because of that history. So keeping a student's criminal history secret makes it possible for them to be treated like any other kid instead of as a menace to society.

But if the student has a violent history, don't the teachers have a right, even a need, to know so they can better protect themselves and the other students? Texas State Teachers Association spokesman Clay Robinson believes they do, and that the information will make it possible for teachers to avoid dangerous situations: "If the kid needed help after class, you could call a security guard to stay with you or stand out in the hall," he said. "If you were walking to your car and you saw the kid lurking about, you might want to ask a security guard or another teacher to walk you to the car."

How many students with criminal backgrounds get in altercations with school staff? How does that number compare with the number of students without criminal backgrounds who get in altercations with school staff? Is there enough difference in the numbers to warrant exposing students to fear and suspicion from their teachers?

National ID push is back

Originally posted 06/15/2011 on lubbockonline.com

They're doing it again: pushing for a national ID card. I've looked at the issue before and I'm sure I will again. It's an issue that won't go away. Recently a Ron Paul video on the subject was put up on YouTube, though it's not dated beyond the upload date. On June 10th Bob Barr of the Atlanta Journal-Constitution commented on the new push to use E-Verify to implement a de facto national ID, based on a post at the Cato Institute by Jim Harper. E-Verify is a national program designed to decrease hiring of illegal aliens. Participation is voluntary, but there is a push to make compliance mandatory for all employers. That would effectively create a cardless national ID system for workers.


EPIC has filed a brief with the Department of Homeland Security (DHS), posted at JDSupra, opposing the expansion of E-Verify. It notes that despite the legal limits imposed on E-Verify, the DHS refuses to limit it to employment records:

First, the SORN claims E-Verify data "may also be used for law enforcement," followed by specified examples in parentheses, "(to prevent fraud and misuse of E-Verify, and to prevent discrimination and identity theft)." It is important to note that the agency fails explicitly to commit to these parenthetical examples as legal limitations. Second, the agency seeks unfettered power to distribute E-Verify records both to public and private parties.

Even before E-Verify has been expanded, the DHS is already trying to stretch its uses beyond the limits imposed by law. We cannot trust government agencies with our personal, identifying data. The risk of abuse is too great.

Privacy is about trust and control

Originally posted 06/13/2011 on lubbockonline.com

In a 2009 guest post on the Security Catalyst blog, Aaron Titus explained the importance of privacy in a world that equates privacy concerns with illegal activity. He focuses specifically on a phrase that pops up quite a bit when talking about privacy: "If you have nothing to hide, why worry about privacy?" I've seen that phrase or some variant thousands of times. It sounds reasonable, but it's not. As Aaron points out, it misses the point entirely:

Baloney. I have everything to hide! When someone says, “I have nothing to hide,” it’s simply not true. What he really means is, “I have nothing to be ashamed of,” which may be true. But shame is only one, limited reason for confidentiality. Confidentiality is not an admission of guilt. I have much to hide, for one simple reason. I cannot trust people to act reasonably or responsibly when they are in possession of certain facts about me, even if I am not ashamed of those facts. For example, I keep my social security number private from a would-be criminal, because I can’t trust that he’ll act responsibly with the information. I’m certainly not ashamed of my SSN. Studies have shown that cancer patients lose their jobs at five times the rate of other employees, and employers tend to overestimate cancer patients’ fatigue. Cancer patients need privacy to avoid unreasonable and irresponsible employment decisions. Cancer patients aren’t ashamed of their medical status—they just need to keep their jobs.

Trust is a major reason we need to be able to keep some things private, but it's not the only one. Another reason I hear for not worrying about privacy is that it's already too late, that we have no privacy anyway. It may feel that way, but it's not true. A lot of our information is out there, but far from all of it. We need to protect the rest, and start getting back control of what is already out.

That is another element of privacy: control. Keeping control of your information. We should be able to decide who can gather our information and what they can do with it. Facebook apps are a perfect example of this, though not the only one. To use most apps on Facebook you have to allow them to access not just your information, but your friends' as well, and you have no say in how they use any of that information. So to use an app, even one that just lets you post an interesting article on your wall with a click, you have to give up your friends' information. Even if they've set their privacy settings so that only friends and family can see their pages, the app gets access to everything because you used a simple way to share information on Facebook. Facebook shouldn't be able to require you to turn over your friends' information, and you shouldn't be able to hand it over even if you want to.
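For a rough sense of the mechanics, here is a sketch of the kind of permission request a Facebook app directed users through at the time. The app id, redirect URL, and exact scope names are illustrative rather than a precise reference to Facebook's API; the point is that one click from you covers your friends' data as well.

```typescript
// A hypothetical OAuth-style permission dialog URL for a Facebook app of that era.
// All identifiers below are made up for illustration.
const appId = "123456789"; // hypothetical app id
const permissions = [
  "user_photos",     // your photos
  "friends_photos",  // your friends' photos, which they never individually approved
  "publish_stream",  // post to your wall on your behalf
];

const dialogUrl =
  "https://www.facebook.com/dialog/oauth" +
  `?client_id=${appId}` +
  `&redirect_uri=${encodeURIComponent("https://example.com/callback")}` +
  `&scope=${permissions.join(",")}`;

console.log(dialogUrl); // one consent screen, and the app walks away with your friends' data too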

When groups like the Electronic Frontier Foundation and the Electronic Privacy Information Center fight for privacy, they aren't fighting for the right to commit crimes, but the right to keep private information that is nobody's business but yours. I'm glad there are people and groups with the desire and the resources to fight that fight, whether or not we realize it needs fighting.

Tuesday, March 27, 2012

Security vs Privacy: It's not what you think it is, part 2

Originally published 06/06/2011 on lubbockonline.com

Last week I told you about Daniel J. Solove, the author of "Nothing to Hide: The False Tradeoff Between Privacy and Security," and his article, "Why 'security' keeps winning out over privacy," on salon.com about the bogus reasons security trumps privacy every time the two come into conflict. We looked at the first two mistaken arguments in his article last Wednesday. Today we'll look at the last three. Eventually we may look at the wider list of faulty security vs privacy arguments, but these will do for now.

The next argument we will look at is the "pendulum argument." This is the idea that in times of heightened risk we should blindly allow privacy concerns to fall by the wayside, because when things calm down the pendulum will swing the other way and privacy will be reinstated. The problem is, when risk is low there isn't a great deal of demand to violate privacy for security, so the need to protect privacy isn't as great. In times of heightened risk the desire to be safe makes us less likely to question measures that supposedly increase our protection. So we get measures that sound good but really do little. Solove mentions the Japanese internment in World War II and the "Red Scare" of the McCarthy era. Our own examples are a little more hidden, though no more subtle: the ongoing monitoring of as close to every landline phone in the U.S. as possible (and that's pretty close); the Census Bureau giving the Department of Homeland Security the cities and zip codes of Arab Americans in 2003, supposedly to help decide which airports needed signs in Arabic; and the full body scanners and gropedowns implemented at airports to prevent bombings. These are just a few examples of actions taken to improve security that did little for security but cut deeply into privacy and liberty.

The War Powers argument looks good at first glance. It's the job of the President to lead our nation in time of war, and nothing should hamper his ability to do that. The NSA wiretapping is justified because, even though it violates the Foreign Intelligence Surveillance Act (FISA), the President's ability to lead our country in times of war is more important than any law. The implication is that there is nothing the President can't do if we are at war. He can put citizens in concentration camps, ignore Constitutionally guaranteed rights, and have people pulled from their houses and shot without explanation, so long as he is doing it under "war powers."

Last, we have the Luddite argument, which says that if you're not willing to embrace technology, you're holding security back through fear and ignorance. But the truth is, these technologies often haven't been vetted properly and may not be ready for prime time. The anti-bomb "puffers" put into service are a prime example. And Mr. Solove's example of biometrics is a very good and timely one:

 

To see the problems with the Luddite argument, let’s look at biometrics. Biometric identification allows people to be identified by their physical characteristics -- fingerprint, eye pattern, voice and so on. The technology has a lot of promise, but there is a problem, one I call the "Titanic phenomenon." The Titanic was thought to be unsinkable, so it lacked adequate lifeboats. If biometric data ever got lost, we could be in a Titanic-like situation -- people’s permanent physical characteristics could be in the hands of criminals, and people could never reclaim their identities. Biometric identification depends on information about people’s characteristics being stored in a database. And we hear case after case of businesses and government agencies that suffer data security breaches.

 

He goes on to point out that if someone steals your SS# you can replace it. Making sure you understand all the implications of using biometrics before ditching our current system is wisdom, not Luddism.

There are a number of arguments used to 'prove' security is more important than privacy, and that privacy is a danger to security. The truth is that there are few situations where privacy and liberty are incompatible with security. But to some government officials and law enforcement, the idea of privacy is synonymous with chaos. We can't let them have the last word on security and privacy policies.

Thursday, March 22, 2012

Security vs privacy - it's not what you think it is.

Originally published 06/01/2011 on lubbockonline.com

Daniel J. Solove is the author of "Nothing to Hide: The False Tradeoff Between Privacy and Security." Yesterday (May 31, 2011) he published an article, "Why 'security' keeps winning out over privacy," on salon.com about the bogus reasons security trumps privacy every time the two come into conflict.

According to Solove, the arguments used to win security over privacy are flawed, and he examines a few of those arguments to show how. We'll look at a couple today and a couple tomorrow:

  • The all-or-nothing fallacy - this fallacy says that you have to go all the way or do nothing at all. He uses the example of surveillance: "In polls, people are asked whether the government should conduct surveillance if it will help in catching terrorists." Of course people say yes; the question implies that we are unprotected and that saying "no" means leaving ourselves exposed to terrorist attack. The government already has the right to conduct surveillance, but it must follow certain rules. As Solove puts it:

     

    We shouldn’t ask: "Do you want the government to engage in surveillance?" Instead, we should ask: "Do you want the government to engage in surveillance without a warrant or probable cause?"

     

    The former question pretends protecting privacy requires a complete loss of security. We weren't without protection before 9/11, and the protection isn't that much better now despite the increased "security" forced upon us.

  • The deference argument says courts should defer to the executive branch in security matters. But it is the courts' job to be a check on the executive, examining what the executive does and making sure that what it does to secure us is actually worth the trade-off. A simple, basic example is the TSA body scanners and patdowns. Will they even stop a slightly determined bomber? Probably not. A policy of deference by the courts means that our civil liberties are trampled for no reason while the people who are supposed to guard them look the other way.

That's just two of the many arguments used to trump privacy for security. Tomorrow we'll look at two more.

The bad guys are phoning for access

Originally posted 05/31/2011 on lubbockonline.com

Jason Halstead of the Winnipeg Sun reports that a woman in Winnipeg, Canada was almost a victim of an unusual blended attack on her computer.

61-year-old Val Christopherson answered her phone and a man told her he was from an online security company that was receiving error messages from her computer. He claimed to want to fix her problem over the phone and convinced her to go to a site called Teamviewer.com and let him connect to her computer. Then he tried to sell her antivirus software and get her to let him install it. That was when she got suspicious and hung up.

Ms. Christopherson was smart. When the man called back she hung up on him again, then unplugged her computer and contacted her ISP and bank to reset her security credentials and let them know her computer might have been breached. Letting herself be talked into letting an unknown person connect remotely to her computer was a lapse, but perhaps an understandable one. As often as we warn against clicking on strange links and OK'ing popups, we never warn about letting strangers access your computer, either in person or remotely. A computer attack initiated by calling the prospective victim is, in the case of private individuals, extremely rare, so no one warns about that type of attack.

So if you get a phone call from someone asking you to give them access to your computer, tell them no. If they say they are from your ISP or the company you get your antivirus from, tell them you'll call them back and hang up. Then use the number from the phone book or the company's own website to call and find out whether they really had been trying to contact you. Never trust an anonymous phone caller with access to your computer.

Monday, March 19, 2012

Originally posted 05/24/2011 on lubbockonline.com

Rebecca Boyle at Popsci.com reports that the federal government will announce next month that all cars must have a black box. Yes, a black box like airplanes carry.

Rebecca tells us that a lot of cars already have one, but there is no standard for how they work, what they record, or how to access the data. That may be a good thing, since even now they record more than we probably realize. According to the article:

General Motors can find out plenty of information about your driving habits, as Autopia explains, like whether you used your turn signal and whether you buckled your seat belt. GM can use this information to build better safety systems, but it can conceivably be used by insurance companies, too, when determining how to pay claims or assign fault. Or it could be used by legal authorities to prove guilt or negligence.

I'm not really sure what I think about this. Most of the objections I have already apply to cell phones. Speaking of cell phones, she also points out that in the future the black box may record whether you were distracted by your cell phone right before an accident. I don't know why she assumes that is a future development, especially if you have a car with Bluetooth for connecting to your cell phone.

The one concern I have about this is the potential for tracking abuse. We already have agencies trying to put trackers on cars without a warrant. How much harder would it be to protect us from unwarranted tracking if the ability were built into the car? Who knows what kind of tracking is already being done on cars with OnStar and similar systems?

Fingerprinting from a distance

Originally published 05/23/2011 on lubbockonline.com

Last January Sandra Swanson reported at Technology Review that Advanced Optical Systems (AOS) has designed a fingerprint reader, called the "Airprint," that works from up to six feet away. This could be a big boon to security, and a great boost to identity theft, too. The technology as reported wasn't quite ready for either duty, but was expected to be sometime in April.

I tried to find more recent reports, but the only article I could find was one in the May issue of Inc. that was so short and generic it was probably taken from the original Technology Review article. I tried to go to the AOS website, but it wasn't responding to queries. So I don't know if the company is still around, although I expect it is; AOS has been around since 1988 and there were no reports of it shutting down or going bankrupt.

The Airprint isn't quite ready for prime-time ID theft, being too large to easily conceal. But that will change. The units will get smaller, scan faster, and scan farther. Change the 'light source' to infrared or ultraviolet and the chance of being detected by the victim is very low. A commenter on the original story noted that the technique would probably work for the raised characters on credit cards, too. Fortunately, my primary card is entirely printed, so it's safe from this scanner. But if someone can get one or more of your fingerprints and your credit card info, they've got access to a lot more than I'd want them to have.

Sunday, March 18, 2012

Did you miss Playstation Network?

Originally posted 05/16/2011 on lubbockonline.com

I've been remiss in not reporting the Sony Playstation Network breach and outage. The network started going back up this weekend. Ok, yesterday.

You might wonder why I would seemingly ignore one of the largest data breaches ever. Part of it was waiting to see what came out. Part of it was that if you were active on the PSN, you were probably already more aware of the situation and following it more closely than I had time to. But now there's more information, and I might actually be able to tell you a few things you don't know about a breach like this. Sadly, it won't be good news.

Joshua Grech of the Daily Telegraph reports that the PSN started coming back up sometime Sunday, although it may take a few days for everything to be available again. He also reports that Sony is going to offer a "Welcome Back" package of software and content to encourage people to stay with Sony and Playstation (or come back if they bought an Xbox during the outage). As part of the increased security in the system, users will have to change their passwords when they log back in, and will have to prove they are the account holder to do it. When announcing the return to service, Sony Group CEO Kazuo Hirai offered one of the best non-apologies I've seen:

"I wish I could tell you that technology is available to completely protect any company against cyber attack. But unfortunately the threat of cyber crime and data theft will continue to plague networks, companies, government agencies and consumers around the world for some time to come."

Translation: "Sorry, people. It's not our fault. We can't prevent it and neither can anybody else, now and forever."

It's true that there is no perfect protection against bad guys, online or in the real world. The disturbing thing is how hard it is to track a truly skilled attacker online. Bianca Bosker at the Huffington Post looks at just how hard it can be. A truly skilled attacker will use botnets, spoofed IP addresses and spoofed MAC addresses, as well as multiple hops through computers - some under the control of the attacker, some not, but all used to obscure the origin of the attack.

When a breach is discovered there are steps taken to find out what happened. What those steps are varies from company to company, but one of the first is to check the system logs:

Once a company discovers its network has been breached, investigators will usually first comb the server’s log files, which record all traffic to and from the server including attempts to access the network or extract information from it. Reviewing these records -- the digital equivalent of watching security camera footage -- offers a look at any suspicious communication with a company’s network and where it may have originated.

Unfortunately, though logs are one of the best tools for seeing what happened on a server, skillful attackers can easily negate them by editing out all evidence of their activities. By doing that they can keep an attack from being noticed for weeks, months, or even years. Unlike theft in the real world, theft online leaves the original on the server. Removing the logs entirely would tip off the systems administrators that something happened; editing the logs removes the evidence that something unusual occurred while leaving all records of normal activity in place.
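As a rough illustration of that first investigative step, here is a minimal sketch of combing an access log for traffic from a single suspicious address. The log path and IP address are hypothetical, and a real investigation uses far richer tooling; as noted above, a skilled attacker may already have edited out the very lines you're looking for.

```typescript
// A minimal sketch of reviewing a web server's access log for one suspicious client.
// The file path and the IP address below are hypothetical.
import { readFileSync } from "node:fs";

function requestsFrom(logPath: string, suspectIp: string): string[] {
  return readFileSync(logPath, "utf8")
    .split("\n")
    // In the common/combined log formats the client address is the first field on each line.
    .filter((line) => line.startsWith(suspectIp));
}

for (const line of requestsFrom("/var/log/nginx/access.log", "203.0.113.7")) {
  console.log(line); // each surviving line is one request the suspect made
}
```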

Sony has gotten a lot of bad publicity for having the PSN down for so long, and is being sued for the breach. We don't know what kind of security they had in place. I would be tempted to say it obviously wasn't adequate, but the truth is, no one has adequate security. Sony's real failure was in the handling of the breach. Instead of being open and informative, they were secretive and withheld important information in the hope of controlling the damage. They did the same thing when they loaded rootkits onto CDs, and I imagine they'll do the same thing the next time they have an event like this. Because breaches like this will happen, it makes no sense to hide when they do. The thing to do is have policies and procedures in place that cover breaches and provide for the rapid dissemination of information to the people affected, to law enforcement, and to the media. That doesn't mean telling everything, but each of those groups should receive the appropriate information.

Every company should have every possible protection in place, but must admit that they are not immune to breaches and prepare for that eventuality. It's the only responsible thing for them to do.

Will Facebook ever get privacy right?

Originally published 05/12/2011 on lubbockonline.com

Nishant Yoshi reported on Symantec's official blog that third-party Facebook applications have had accidental access to much more of Facebook users' info and pages than anyone knew:

Third parties, in particular advertisers, have accidentally had access to Facebook users’ accounts including profiles, photographs, chat, and also had the ability to post messages and mine personal information. Fortunately, these third-parties may not have realized their ability to access this information. We have reported this issue to Facebook, who has taken corrective action to help eliminate this issue.

Symantec's researchers estimate that over 100,000 apps may be leaking data. Over 600,000,000 people have Facebook accounts. Because of an oversight, 100,000 third parties, both known and unknown, may have had access to their information, no matter how tightly they had locked down their privacy settings. The only saving grace is that few, if any, of those third parties may have realized the treasure they were sitting on.

Facebook has to start taking privacy more seriously. But they never will if users don't demand it, because the Facebook business model is to get as many users as possible and encourage them to put as much data as possible, as openly as possible, on the site so Facebook can sell access to it. As it turns out, Facebook had actually given away the keys to the kingdom, but fortunately, nobody seems to have noticed.

Congress says, "protect customers", Justice Department says "spy on them."

Originally posted 05/11/2011 on lubbockonline.com

Today Apple and Google execs appeared before Congress to answer questions about the way their operating systems gather user data. Darrell Etherington of the Gigaom column at Businessweek reports that Senator Al Franken assured everyone that the purpose of the hearing was not to bring an end to location services, but to move forward while protecting customers.

While Senator Franken was working to protect consumers from overreaching data collection by cell phone makers, the Justice Department was arguing for laws requiring cell phone providers to collect more data on their customers. Declan McCullagh reported in his Privacy Inc. blog that Jason Weinstein, the deputy assistant attorney general for the criminal division, testified on the need for providers to collect and retain data to make it easier for law enforcement to gather evidence:

"Many wireless providers do not retain records that would enable law enforcement to identify a suspect's smartphone based on the IP addresses collected by Web sites that the suspect visited," he added.

Really? They won't be able to identify a person's smartphone if they can't use the IP address assigned to it? I know some criminals will avoid putting any identifying information on the phone if they can, but really. As if the only way to tie a smartphone to its owner is by knowing its IP address and the websites it visited. It makes you wonder how they were ever able to solve crimes back in the dark ages, when there were no smartphones with IP addresses and websites to record them.

This is a typical overreach. The idea that the government can require gathering data on everyone because there are a few instances where the data may help in a criminal case goes against the spirit of the 4th Amendment and the idea of innocent until proven guilty. Why is a cell phone any different from the information in my home? They can't go into everyone's home and gather data to make it easier to solve criminal cases. Why should they be allowed to go into my phone? And why should they be able to gather data, or have someone else gather data, from my phone without any evidence that I've committed a crime? They shouldn't. That's what the Bill of Rights was written to protect us from.

Saturday, March 17, 2012

Is "Do not track" legislation a good idea?

Originally posted 05/09/2011 on lubbockonline.com

Cecilia Kang reports on SFGate.com that more senators are jumping on the "Do Not Track" legislation bandwagon. In general this is a good thing. Companies should not be able to follow your travels across the web, storing the data and analyzing it until they know your likes and dislikes better than you do. People need to be given control of their own data, and legislation is one way to do it.

But is it the best way? Good intentions (late as they are) aside, rushing a bill through to protect consumers could be as bad as doing nothing. Maybe worse. Suddenly requiring companies to stop tracking and/or destroy the tracking data they've gathered would put many companies out of business - online anyway. So much of the free web we enjoy is paid for by the data gathered by tracking that removing that revenue cold turkey would hit the web like an earthquake.

Web browsers are starting to have "do not track" features built in. Giving users the choice of whether or not to be tracked is a good thing, and it would have a more gradual impact on web businesses. I wonder how many congressmen really understand how integral tracking is to the web experience we all know and love? It pays for many of our favorite websites (Facebook, anyone?). Take it away without granting time to find a new revenue model and those sites will have to charge. Steve DelBianco, executive director of NetChoice, has a plan for bringing that point home to one senator. I think he should implement it for all of them:

"I've asked for a waiver of Senate ethics rules so I can give Sen. Rockefeller a gift he really needs - an iPad. The senator can see for himself how interest tracking lets advertisers pay for all those free apps and Web services that regular Americans love to use."

How many of the sites you have a free account with would you be willing to pay for? How will they make money without tracking or making members pay? Those are the real questions. And until someone has a good answer, blanket "do not track" legislation is a bad idea.
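For the technically curious, here is a minimal sketch of how the browser "do not track" feature mentioned above looks from the receiving end: the browser sends a DNT header set to "1", and a site that chooses to honor it checks that header before running any tracking code. This assumes a plain Node.js server, and the tracking call is only a placeholder.

```typescript
// A minimal sketch of honoring the browser "do not track" preference.
// The DNT request header is "1" when the user has opted out of tracking.
import { createServer } from "node:http";

createServer((req, res) => {
  const doNotTrack = req.headers["dnt"] === "1";

  if (!doNotTrack) {
    console.log(`tracking visit to ${req.url}`); // placeholder for real analytics/ad tracking
  }

  res.end(doNotTrack ? "Visit not tracked.\n" : "Visit tracked.\n");
}).listen(8080);
```

Of course, the header is only a request; whether a site respects it is exactly the kind of choice the proposed legislation would take out of the site's hands.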