Wednesday, March 28, 2012

Is protecting business deposits the bank's responsibility?

Originally published on 06/08/2011 on lubbockonline.com

A court magistrate has recommended that a motion for a jury trial be denied in the case of Patco Construction Company, Inc. v. People's United Bank d/b/a Ocean Bank. This is another case that demonstrates the lower level of protection afforded to business bank accounts compared to consumer accounts. If you never plan on opening a business that may not seem important to you, but if the company you're working for goes under because its bank provided inadequate protection to business accounts, it will become very important.

This case is similar in some ways to the case of Plains Capital v Hillary, which I followed pretty closely at the time. But there are differences. As far as I know, no one ever figured out how the bad guys got the credentials from Hillary Machinery. The ZeuS trojan was allegedly the source of the compromise at Patco, and that is actually good evidence that the bank's security was inadequate.

But the magistrate doesn't disagree that the security was inadequate. What he does disagree with is the idea that it was the bank's responsibility to have better security to protect its customers' data. He believes that the bank provided multi-factor authentication as recommended by the banking industry. Let's take a quick look at that. Multi-factor authentication is usually considered to require at least two out of three factors (a rough sketch of how two of them combine in practice follows the list):

  • Something you know (like a password)
  • Something you have (like a cryptocard)
  • Something you are (like a fingerprint)
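
To make that concrete, here's a minimal sketch, in Python, of what a two-factor login check looks like: the login succeeds only if both the password (something you know) and a one-time code from a token (something you have) check out. The helper names, the stored hash and the shared secret are illustrative assumptions, not anything Ocean Bank actually used.

    import hashlib, hmac, struct, time

    def verify_password(password, salt, stored_hash):
        # Something you know: compare a salted hash, never the raw password.
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
        return hmac.compare_digest(candidate, stored_hash)

    def totp(shared_secret, period=30, digits=6):
        # Something you have: a token (or app) derives a one-time code from a
        # shared secret and the current time (RFC 6238 style).
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(shared_secret, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def two_factor_login(password, code, salt, stored_hash, shared_secret):
        # Both factors must pass; either one alone is not enough.
        return (verify_password(password, salt, stored_hash)
                and hmac.compare_digest(code, totp(shared_secret)))

The problem Patco's experts tried to explain is that a trojan like ZeuS sits inside the victim's browser, so it can capture both factors as they are typed and use them in real time; a scheme like the one above authenticates the login, not the individual transaction.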

Krebs on Security reported on the recommended decision and noted that Patco tried to educate the court on the current state of multi-factor authentication, but with little or no luck. They informed the court that trojans like ZeuS can negate the benefit of cryptocards, but apparently that was not good enough for the magistrate. Unless Patco appeals a decision based on his recommendation, they will be out the $300,000 that couldn't be recovered.

Consumers have a lot of legal protections when it comes to their money in the bank. Large businesses' sheer size is their protection. But small and medium sized businesses have little or no legal protection. The Federal Financial Institutions Examination Council (FFIEC) was about to release updated guidelines last year, but didn't. It's criminal that there is no better guidance for banks on protecting ALL customers' money, and something needs to be done. The someone doing it should not be the courts.

RIAA pushes anti "login sharing" legislation in Tennessee, hopes others follow

Originally published on 06/06/2011 on lubbockonline.com

Sheila Burke and Lucas L. Johnson II report in the Tennessean that Tennessee has passed legislation making it illegal to share logins to "entertainment subscription services." The article focuses on Netflix logins, but it covers any kind of entertainment subscription login - whatever that means.

According to the article, the Recording Industry Association of America (RIAA) pushed for this bill, which makes the big focus of all the news stories - Netflix - all the stranger. The RIAA's concern would primarily be music services like Rhapsody, but the law does cover anything that could be called an "entertainment subscription service." Netflix qualifies, and sharing of logins in college dorms and by 'services' selling logins could be a problem. But is this really a big problem, or is this another case of an industry with a failing old business model looking for any excuse to explain its problems other than the fact that old business models are changing, and businesses that won't change will fail?

The question is, how long will they be able to push legislation to prop up their old, failing business model, and how much damage will they do in that time?

Tuesday, March 27, 2012

Security vs Privacy: It's not what you think it is, part 2

Originally published 06/06/2011 on lubbockonline.com

Last week I told you about Daniel J. Solove, the author of "Nothing to Hide: The False Tradeoff Between Privacy and Security," and his article, "Why 'security' keeps winning out over privacy," on salon.com about the bogus reasons security trumps privacy every time the two come into conflict. We looked at the first two mistaken arguments in his article last Wednesday. Today we'll look at the last three. Eventually we may look at the wider list of faulty security vs privacy arguments, but these will do for now.

The next argument we will look at is the "pendulum argument." This is the idea that in times of heightened risk we should blindly allow privacy concerns to fall by the wayside, because when things calm down the pendulum will swing the other way and privacy will be reinstated. The problem is, when risk is low there isn't a great deal of demand to violate privacy for security, so the need to protect privacy isn't as great. In times of heightened risk the desire to be safe makes us less likely to question measures that supposedly increase our protection, so we get measures that sound good but really do little. Solove mentions the Japanese internment in World War II and the "Red Scare" of the McCarthy era. Our problems are a little more hidden, though no more subtle: the ongoing monitoring of as close to every landline phone in the U.S. as possible (that's pretty close); the Census Bureau giving the Department of Homeland Security the cities and zip codes of Arab Americans in 2003, supposedly to help decide which airports needed signs in Arabic; and the full body scanners and gropedowns implemented at airports to prevent bombings. These are just a few examples of actions taken in the name of security that did little for it but cut deeply into privacy and liberty.

The War Powers argument looks good at first glance. It's the job of the President to lead our nation in time of war, and nothing should hamper his ability to do that. The NSA wiretapping is justified because, even though it violates the Foreign Intelligence Surveillance Act (FISA), the President's ability to lead our country in times of war is more important than any law. The implication is that there is nothing the president can't do if we are at war. He can put citizens in concentration camps, ignore Constitutionally guaranteed rights, and have people pulled from their houses and shot without explanation if he is doing it under "War Powers."

Last, we have the Luddite argument, which says that if you're not willing to embrace technology you're holding security back through fear and ignorance. But the truth is, often these technologies haven't been vetted properly and may not be ready for prime time. The explosive-detecting "puffers" put into service at airports are a prime example. And Mr. Solove's discussion of biometrics is a very good and timely one:

 

To see the problems with the Luddite argument, let’s look at biometrics. Biometric identification allows people to be identified by their physical characteristics -- fingerprint, eye pattern, voice and so on. The technology has a lot of promise, but there is a problem, one I call the "Titanic phenomenon." The Titanic was thought to be unsinkable, so it lacked adequate lifeboats. If biometric data ever got lost, we could be in a Titanic-like situation -- people’s permanent physical characteristics could be in the hands of criminals, and people could never reclaim their identities. Biometric identification depends on information about people’s characteristics being stored in a database. And we hear case after case of businesses and government agencies that suffer data security breaches.

 

He goes on to point out that if someone steals your SS# you can replace it; a stolen fingerprint is yours for life. Making sure you understand all the implications of using biometrics before ditching our current system is wisdom, not Luddism.

There are a number of arguments used to 'prove' that security is more important than privacy, and that privacy is a danger to security. The truth is that there are few situations where privacy and liberty are incompatible with security. But to some government officials and law enforcement officers the idea of privacy is synonymous with chaos. We can't let them have the last word on security and privacy policies.

Facebook Friday

Originally published 06/03/2011 on lubbockonline.com

Poachers shouldn't advertise

An AP story in the Miami Herald tells us of the perils of being a poacher on Facebook. Darin Lee Waldo learned the hard way that Facebook is not the place to brag about your poaching. Or anywhere else on the internet, for that matter. Especially when the people you're trading photos with are investigators with the Florida Fish and Wildlife Commission.

Of course, Darin wasn't just bragging about his poaching. He was a felon in illegal possession of a firearm bragging about his poaching on Facebook and inviting government agents on poaching trips. I'm sure he came from a long line of criminal masterminds.

Fireman suing for Facebook firing

Capecodtoday.com reports that former fireman Richard Doherty is suing the town of Bourne, MA for wrongful firing. The town says he broke department policy by posting derogatory remarks about members of the fire and police departments and their leadership. Doherty says he was just doing the normal venting employees do. He may have just been venting, but venting on Facebook isn't like venting to your wife or with your buddies, as Mr. Doherty found out.

Facebook can be fun, it can be useful, but it can also be the knife you hold to your own throat. Be careful what you do with it.

Saturday, March 24, 2012

New Mac Malware on Facebook, New Mac Defender bypasses Apple fix

Originally posted 06/02/2011 on lubbockonline.com

It's been a busy couple of days in the malware world.

New Mac and PC malware reported on Facebook

F-Secure reported "a significant malware" affecting both Macs and PCs circulating on Facebook, then reported that Facebook finally blocked it. I'm not sure how significant it really was - by the time I checked the Openbook link in F-Secure's initial post there were only two examples of the bogus links popping up, and the good folks at F-Secure couldn't manage to get infected by it even though they were trying. But if you should see messages or updates with the following subjects, don't click on the links:

 

 

At 17:00 GMT the attack changed subject line to:

one more stolen home porn video ;) Rihanna and Hayden Panettiere and…

Rihanna And Hayden Panettiere !!! Private Lesbian HOT Sex Tape stolen from home archive of Rihanna! Hot Lesbian Video - Rihanna And Hayden Panettiere !!

 

Apple in escalating war with Mac Defender?

On Tuesday, 05-31-11 Apple released Security Update 2011-003 for Mac OS X 10.6.7 and Mac OS X 10.6.7 Server. The update warns users when they download a known variant of Mac Defender and scrubs the malware from systems that have already been infected. It also has a daily update function to download definitions of new Mac Defender variants (and presumably other malware that may pop up).

It's a good thing Apple had the foresight to make their fix upgradeable. On Wednesday, 06-01-11 a new variant of Mac Defender that bypasses the Apple fix appeared. I'm sure that by the time you read this, or no later than Friday, 06-03-11 an update will take care of the new variant, and a day or so later a 'fixed' Mac Defender will appear to bypass Apple's update. And so on, and so on, and so on. That's not a knock on Apple, it's just the way these things work. The attacked company, in this case Apple, cannot ignore the malware, and the malware authors aren't going to let Apple beat them. Not for a while, anyway.

I'm glad Apple has built a fix for the latest version of OS X, but I wonder if Mac Defender runs on earlier versions. Not just earlier versions of Snow Leopard, but Leopard and Tiger, too. There are a lot of people still using them, but Apple's just leaving them out in the cold. Hopefully Apple will release a version for Leopard, at least.

Thursday, March 22, 2012

Security vs privacy - it's not what you think it is.

Originally published 06/01/2011 on lubbockonline.com

Daniel J. Solove is the author of "Nothing to Hide: The False Tradeoff Between Privacy and Security." Yesterday (May 31, 2011) he published an article, "Why 'security' keeps winning out over privacy," on salon.com about the bogus reasons security trumps privacy every time the two come into conflict.

According to Solove, the arguments used to win security over privacy are flawed, and he examines a few of those arguments to show how. We'll look at a couple today and a couple tomorrow:

  • The all or nothing fallacy - this fallacy says that you have to go all the way, or do nothing at all. He uses the example of surveillance: "In polls, people are asked whether the government should conduct surveillance if it will help in catching terrorists." Of course people say yes; the question implies that we are unprotected and that saying "no" means leaving ourselves exposed to terrorist attack. The government already has the right to conduct surveillance, but must follow certain rules. As Solove puts it:

     

    We shouldn’t ask: "Do you want the government to engage in surveillance?" Instead, we should ask: "Do you want the government to engage in surveillance without a warrant or probable cause?"

     

    The former question pretends protecting privacy requires a complete loss of security. We weren't without protection before 9/11, and the protection isn't that much better now despite the increased "security" forced upon us.

  • The deference argument says courts should defer to the executive branch in security matters. But it is the courts' job to be a check on the executive, examining what the executive does and making sure that what it does to secure us is actually worth the trade-off. A simple, basic example is the TSA body scanners and patdowns. Will they even stop a slightly determined bomber? Probably not. A policy of deference by the courts means that our civil liberties are trampled for no reason while the people who are supposed to guard them look the other way.

That's just two of the many arguments used to trump privacy for security. Tomorrow we'll look at two more.

The bad guys are phoning for access

Originally posted 05/31/2011 on lubbockonline.com

Jason Halstead of the Winnipeg Sun reports that a woman in Winnipeg, Canada was almost a victim of an unusual blended attack on her computer.

61-year-old Val Christopherson answered her phone and a man told her he was from an online security company that was receiving error messages from her computer. He claimed to want to fix her problem over the phone and convinced her to go to a site called Teamviewer.com and let him connect to her computer. Then he tried to sell her antivirus software and get her to let him install it. That was when she got suspicious and hung up.

Ms. Christopherson was smart. When the man called back she hung up on him again, then unplugged her computer and contacted her ISP and bank to reset her security credentials and let them know her computer might have been breached. Letting herself be talked into allowing an unknown person to connect remotely to her computer was a lapse, but perhaps an understandable one. As often as we warn against clicking on strange links and OK'ing popups, we never warn about letting strangers access your computer, either in person or remotely. A computer attack initiated by calling the prospective victim is, in the case of private individuals, extremely rare, so no one warns about that type of attack.

So if you get a phone call from someone asking you to give them access to your computer, tell them no. If they say they are from your ISP or the company you get your anti-virus from, tell them you'll call them back and hang up. Then use the number from the phone book or the internet to call them and find out whether they really had been trying to contact you. Never trust an anonymous phone caller with access to your computer.

Happy Memorial Day

Originally published 05/30/2011 on lubbockonline.com

Let's all remember between the barbecues, kite flying and trips to the lake to spend a few moments to honor the memory of those who have given everything so that we can enjoy the freedoms we have. And tomorrow let's continue to honor them by making sure those freedoms, so ably defended with the sword, are not lost by the pen.

Tuesday, March 20, 2012

Facebook Friday: Sex offender busted for surfing at Apple store

Originally published 05/27/2011 on lubbockonline.com

Bob Cuddy of the San Luis Obispo Tribune reports that a known sex offender, Robert Nicholis McGuire, was arrested at the San Luis Obispo Apple Store for violating his probation. In a perfect example of going to the wrong place at the right time, Mr. McGuire was recognized by sheriff's deputies as he went into the Apple Store, where he proceeded to log into Facebook on a display computer. A deputy went to the computer next to McGuire's. According to the SLO Sheriff's Department press release:

San Luis Obispo County Sheriff's detectives, including the Sexual Assault Felony Enforcement (SAFE) team spotted a known sex offender in downtown San Luis Obispo on Wed afternoon. One of the SAFE team detectives recognized the man from a previous child pornography case. As one detective followed the man, another checked the probation terms of the registered sex offender. They followed the man to the Apple store on Higuera St. where he entered and began to log on to the internet from a display computer. Another detective went to the computer next to the man and logged on to the Megan’s law website. At about the same time the probation term information was received that clearly indicated McGuire was prohibited from using the internet. McGuire had logged on to his Facebook page. McGuire was taken into custody without incident after he left the store. McGuire made a statement to detectives that he thought he was being followed after the man standing next to him logged onto the Megan's Law site. McGuire is being held without bail at the San Luis Obispo County Jail.

Obviously Mr. McGuire is a "mind your own business" kind of guy. Otherwise he would have noticed someone logging onto the California Megan's Law sex offender tracking website on the computer next to him. He would have noticed that it was the private law enforcement version with full info about sex offenders, not the limited info public version. He probably would have not opened a web browser or closed it if it was open. But he didn't notice, and he did open a web browser and log onto Facebook, and now one more predator is off the streets thanks to his own stupidity.

New MacDefender variant doesn't ask for admin password to install

Originally posted 05/26/2011 on lubbockonline.com

If you use Safari, go to Safari - Preferences and select the General tab, then uncheck the "Open 'safe' files after downloading" option. If you surf the web from your admin account, create a normal user account and start using it. There is a new variant of Mac Defender, called MacGuard, that doesn't require an admin password to install. If you wind up at one of the bogus download sites while logged in as an admin with "open safe files" enabled, it will install without ever asking your permission. Most Mac users still use the default account that was set up when they first started their Mac, and that is an admin account.
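
If you'd rather make those changes from the command line, here's a rough sketch (Python, assuming the Mac OS X of that era). The AutoOpenSafeDownloads preference key is the commonly documented one for Safari's "open safe files" checkbox, but treat the key name and the admin-group check as assumptions to verify on your own machine.

    import getpass
    import grp
    import subprocess

    def is_admin_user():
        # On Mac OS X, administrator accounts are members of the "admin" group
        # (the membership listing can be incomplete on some directory setups).
        return getpass.getuser() in grp.getgrnam("admin").gr_mem

    def disable_open_safe_files():
        # Assumed preference key for Safari's "Open 'safe' files after downloading" option.
        subprocess.check_call(["defaults", "write", "com.apple.Safari",
                               "AutoOpenSafeDownloads", "-bool", "false"])

    if __name__ == "__main__":
        if is_admin_user():
            print("You're browsing from an admin account; consider creating a standard one.")
        disable_open_safe_files()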


MacGuard is still a relatively low-risk piece of malware. Intego rates it as a medium threat, but it's hard to say if that's an over- or underestimate. It is a step up the threat scale from MacDefender: it won't just affect naive users who click OK on any dialog that pops up, because there is no dialog to OK.

It might be too early to say that if you run a Mac you need to run anti-virus, but if you're starting to get antsy about it, Sophos' free version of its Mac anti-virus protects against Mac Defender and I'm sure it will be quickly updated to protect against MacGuard. And there are always the paid versions from Sophos as well as from Symantec, Avast, and others.

This is not the end of the Mac experience as we know it, but it is the end of telling people there is no malware on the Mac. The good news for now is that all you have to do to protect yourself is do your everyday computing in a non-admin account and make sure you know what you're okaying before you click the blue button. And turning off the "open safe files" option in Safari wouldn't hurt.

Monday, March 19, 2012

The velvet glove of martial law?

On Friday, March 16th President Obama continued the breakdown of the Bill of Rights started with the Patriot Act in 2001. That is the opinion of much of the Internet, and a quick read of Executive Order -- National Defense Resources Preparedness seems to confirm it. But such things seldom occur in a vacuum, and this is no exception. The text is basically an update of executive orders going back to 1950, including EOs by Bill Clinton and George W. Bush.

I haven't read the full order, but I've read enough of it to know that I could easily, if I wanted to, write a long, point-by-point blog post on the potential dangers of each section. The language would be easy to use for fearmongering. But the fact is, taking any one section of the economy under complete government control would require not just an executive order, but a clear and present danger of a kind I'm not sure anyone could have been convinced existed even immediately after September 11, 2001. To create true martial law over the entire country would be far more difficult. It could probably be done, but it would probably require more resources than are available without pulling troops home from foreign posts - assuming the troops would go along with it.

The reality is that any competent entity, public or private, will have disaster preparedness documents. That's what this executive order does. It spells out who is in charge of what in the event of natural or manmade disasters. It makes some changes to take recent developments into account, but does very little to significantly change what was done by earlier presidents.

Like any other leader, President Obama has done plenty of things to disagree with and fight about. This just isn't one of them.

Even Apple had to admit it: Mac Defender is real malware for the Mac.

Originally published 05/25/2011 at lubbockonline.com


The Mac now has real malware. First announced May 2nd by Intego, Mac Defender is similar to the numerous fake anti-virus and anti-malware programs on the Windows side. As far as danger goes, it's a standard scam to get your credit card number and other identifying information. Unlike some other trojan software it doesn't do anything to your computer or the data on it.


Apple spent three weeks seemingly ignoring the problem, but on Monday they added a knowledge base article on avoiding or removing the malware. They are also preparing an OS update that will explicitly warn a user who downloads Mac Defender or one of its variants. They haven't said which versions of Mac OS will be getting the update, but hopefully they will cover all the affected versions, not just OS X 10.5 and 10.6.

Warning a user that they're downloading malware is all well and good, but as time goes on and the list of malware grows that could become pretty unwieldy. Hopefully now that there is a piece of malware for OS X that is real, widespread, and effective at what it does Apple will pay more attention to the reality that, like all other software, OS X is not bulletproof and needs serious attention paid to security.

Originally posted 05/24/2011 on lubbockonline.com

Rebecca Boyle at Popsci.com reports that the federal government will announce next month that all cars must have a black box. Yes, a black box like airplanes carry.

Rebecca tells us that a lot of cars already do, but there is no standard for how they work, what they record, or how to access it. That may be a good thing, since even now they record more than we probably realize. According to the article:

General Motors can find out plenty of information about your driving habits, as Autopia explains, like whether you used your turn signal and whether you buckled your seat belt. GM can use this information to build better safety systems, but it can conceivably be used by insurance companies, too, when determining how to pay claims or assign fault. Or it could be used by legal authorities to prove guilt or negligence.

I'm not really sure what I think about this. Most of the objections I have are already applicable to cell phones. Speaking of cell phones, she also points out that in the future the black box may record if you were distracted by your cell phone right before an accident. I don't know why she assumes that is a future development, especially if you have a car with bluetooth for connecting to your cell phone.

The one concern I have about this is the potential for tracking abuse. We already have agencies trying to put trackers on cars without a warrant. How much harder would it be to protect us from unwarranted tracking if the ability were built into the car? Who knows what kind of tracking is already being done on cars with OnStar and similar systems?

Fingerprinting from a distance

Originally published 05/23/2011 on lubbockonline.com

Last January Sandra Swanson reported at Technology Review that Advanced Optical Systems (AOS) has designed a fingerprint reader, called the "Airprint," that works from up to six feet away. This could be a big boon to security, and a great boost to identity theft, too. The technology as reported wasn't quite ready for either duty, but was expected to be by sometime in April.

I tried to find more recent reports, but the only thing I could find was an article in the May issue of Inc. that was so short and generic it was probably taken from the original Technology Review piece. I tried to go to the AOS website, but it wasn't responding to queries. So I don't know if the company is still around, although I expect it is. AOS has been around since 1988 and there were no reports of it shutting down or going bankrupt.

The Airprint isn't quite ready for prime time ID theft, being too large to easily conceal. But that will change. The units will get smaller, scan faster, and scan farther. Change the 'light source' to infrared or ultraviolet and the chance of being detected by the victim is very low. A commenter on the original story noted that the technique would probably work on the raised characters on credit cards, too. Fortunately, my primary card is entirely printed, so it's safe from this scanner. But if someone can get one or more fingerprints and your credit card info, they've got access to a lot more than I'd want them to have.

Sunday, March 18, 2012

US CERT warns against Mississippi disaster scams

Originally posted 05/18/2011 on lubbockonline.com

The U.S. Computer Emergency Readiness Team (US-CERT) reports that scammers are exploiting the flooding in Mississippi, sending out emails claiming to be from relief organizations. To help us protect ourselves from that, and from some of the other scams floating around the internet, they provide links to informative web pages and PDFs:

 

 

Whenever a disaster occurs anywhere the scammers come out in force, but with good information and a little thought you can safely donate online with confidence that your money will be used the way you intended.

*Information from US-CERT

Did you miss Playstation Network?

Originally posted 05/16/2011 on lubbockonline.com

I've been remiss in not reporting the Sony Playstation Network breach and outage. The network started going back up this weekend. Ok, yesterday.

You might wonder why I would seemingly ignore one of the largest data breaches ever. Part of it was waiting to see what came out. Part of it was that if you were active on the PSN you were probably already more aware of the situation and following it more closely than I had time to. But now there's more information, and I might actually be able to tell you a few things you don't know about a breach like this. Sadly, it won't be good news.

Joshua Grech of the Daily Telegraph reports that the PSN started coming back up sometime Sunday, although it may take a few days for everything to be available again. He also reports that Sony is going to offer a "Welcome Back" package of software and content to encourage people to stay with Sony and Playstation (or come back if they've bought an Xbox during the outage). As part of the increased security in the system users will have to change passwords when they log back in, and will have to prove they are the account holder to do it. When announcing the return to service Sony Group CEO Kazuo Hirai had one of the best non-apologies I've seen:

"I wish I could tell you that technology is available to completely protect any company against cyber attack. "But unfortunately the threat of cyber crime and data theft will continue to plague networks, companies, government agencies and consumers around the world for some time to come."

Translation: "Sorry, people. It's not our fault. We can't prevent it and neither can anybody else, now and forever."

It's true that there is no perfect protection against bad guys, online or in the real world, but the disturbing thing is how hard it is to track a truly skilled attacker online. Bianca Bosker at the Huffington Post looks at just how hard it can be. A truly skilled attacker will use botnets, spoofed IP addresses and spoofed MAC addresses, as well as multiple hops through computers - some under the control of the attacker, some not, but all used to obscure the origin of the attack.

When a breach is discovered there are steps taken to find out what happened. What those steps are varies from company to company, but one of the first is to check the system logs:

Once a company discovers its network has been breached, investigators will usually first comb the server’s log files, which record all traffic to and from the server including attempts to access the network or extract information from it. Reviewing these records -- the digital equivalent of watching security camera footage -- offers a look at any suspicious communication with a company’s network and where it may have originated.

Unfortunately, though logs are one of the best tools for seeing what happened on the server, skillful attackers can easily negate them by editing all evidence of their activities out. By doing that they could keep an attack from being noticed for weeks, months, or even years. Unlike theft in the real world, theft online leaves the original on the server. Removing the logs entirely would tip off the systems administrators that something happened. Editing the logs removes the evidence that something unusual has happened while leaving all records of normal activity in place.
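
To see how little "editing the logs" can involve, here's a rough sketch of the attacker's side of it, using a made-up access-log path and a made-up attacker address; every line that doesn't mention the attacker is written back exactly as it was, so the log still looks routine.

    ATTACKER_IP = "203.0.113.7"                  # invented address, for illustration only
    LOG_PATH = "/var/log/apache2/access.log"     # assumed log location

    def scrub_log(path, marker):
        with open(path) as f:
            lines = f.readlines()
        survivors = [line for line in lines if marker not in line]
        with open(path, "w") as f:
            f.writelines(survivors)              # normal traffic untouched, evidence gone

    scrub_log(LOG_PATH, ATTACKER_IP)

The usual countermeasure is to ship log entries in real time to a separate, append-only log server the attacker can't reach, so there is always a copy that can't be quietly rewritten.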

Sony has gotten a lot of bad publicity for having the PSN down for so long, and is being sued for the breach. We don't know what kind of security they had in place. I would be tempted to say that it obviously wasn't adequate, but the truth is, no one has adequate security. Sony's real failure was in the handling of the breach. Instead of being open and informative they were secretive and withheld important information in the hope of controlling the damage. They did the same thing when they loaded rootkits onto CDs, and I imagine they'll do the same thing the next time they have an event like this. Because breaches like this will happen, it makes no sense to hide when they do. The thing to do is have policies and procedures in place that cover breaches and provide for the rapid dissemination of information to the people affected, law enforcement and the media. That doesn't mean telling everything, but each of those groups should receive the appropriate information.

Every company should have every possible protection in place, but must admit that they are not immune to breaches and prepare for that eventuality. It's the only responsible thing for them to do.

Is looking at thumbnails of porn grounds for firing?

Originally published 05/13/2011 on lubbockonline.com

David Kravets of Wired's Threat Level blog reports that a high school biology teacher was fired for studying too much of it on school computers.

When I first read the headline and that the teacher had gotten some "not safe for work" (nsfw) images when searching for "blonde" and been fired because of it, I thought he had gotten a raw deal. Then I read more of the story and learned that he was an idiot. Or thought he was clever and outsmarted himself.

Robert Zellner may have counted on claiming retaliation if he was fired. What he didn't count on was the monitoring software that had been put on his computer because it was having unusual problems. So when he disengaged the school's browsing filters, typed "blonde" into Google Images and spent a little over a minute looking at pornographic thumbnails, the school had complete records.

He may have thought that the fact that he only skimmed through pages of pornographic thumbnails without clicking on any links would save him - he could claim he was looking for something else. But the monitoring software showed that he disabled the porn filters, meaning he wanted to find something he knew the filters would block. Two pages of porn thumbnails made it pretty clear what that was.

It really didn't matter what popped up in the search once he turned off the web filter. Disabling it was probably grounds for firing. But when porn popped up and he didn't instantly close the window, he signed his pink slip.

Will Facebook ever get privacy right?

Originally published 05/12/2011 on lubbockonline.com

Nishant Doshi reported on Symantec's official blog that third-party Facebook applications have had accidental access to much more of Facebook users' info and pages than anyone knew:

Third parties, in particular advertisers, have accidentally had access to Facebook users’ accounts including profiles, photographs, chat, and also had the ability to post messages and mine personal information. Fortunately, these third-parties may not have realized their ability to access this information. We have reported this issue to Facebook, who has taken corrective action to help eliminate this issue.

Symantec's researchers estimate that over 100,000 apps may be leaking data. Over 600,000,000 people have Facebook accounts. Because of an oversight, 100,000 third parties, both known and unknown, may have had access to their information, no matter how tightly they had controlled the privacy settings. The only saving grace of this news is that few, if any, of those third parties may have realized the treasure they were sitting on.

Facebook has to start taking privacy more seriously. But they never will if users don't demand it because the Facebook business model is to get as many users as possible and encourage them to put as much data as possible, as openly as possible, on the site so Facebook can sell access to it. As it turns out, Facebook had actually given away the keys to the kingdom, but fortunately, nobody seems to have noticed.

Congress says, "protect customers", Justice Department says "spy on them."

Originally posted 05/11/2011 on lubbockonline.com

Today Apple and Google execs appeared before Congress to answer questions about the way their operating systems gather user data. Darrell Etherington of the Gigaom column at Businessweek reports that Senator Al Franken assured everyone that the purpose of the hearing was not to bring an end to location services, but to move forward while protecting customers.

While Senator Franken was working to protect consumers from overreaching data collection by cell phone makers, the Justice Department was arguing for laws requiring cell phone providers to collect more data on their customers. Declan McCullagh reported in his Privacy Inc. blog that Jason Weinstein, the deputy assistant attorney general for the criminal division, testified on the need for providers to collect and retain data to make it easier for law enforcement to gather evidence:

"Many wireless providers do not retain records that would enable law enforcement to identify a suspect's smartphone based on the IP addresses collected by Web sites that the suspect visited," he added.

Really? They won't be able to identify a person's smart phone if they can't use the IP address assigned to it? I know some criminals will avoid putting any identifying information on the phone if they can, but really - the only way to identify that a smart phone belongs to someone is by knowing its IP address and the web sites it visited? It makes you wonder how they were ever able to solve crimes back in the dark ages when there were no smart phones with IP addresses and web sites to record them.

This is a typical overreach. The idea that government can require gathering data on everyone because there are a few instances where the data may help in a criminal case goes against the spirit of the 4th Amendment and the idea of innocent until proven guilty. Why is a cell phone any different from the information in my home? They can't go into everyone's home and gather data to make it easier to solve criminal cases. Why should they be allowed to go into my phone? And why should they be able to gather data, or have someone else gather data from my phone, without any evidence I've committed a crime? They shouldn't. That's what the Bill of Rights was written to protect us from.

Saturday, March 17, 2012

Should a judge allow 23,000 "John Doe" subpoenas?

Originally published 05/10/2011 on lubbockonline.com

David Kravets of the Threat Level blog reports that a federal judge is allowing U.S. Copyright Group to file 23,000 "John Doe" subpoenas with ISPs around the country. The number is only likely to grow as they continue to have people go to torrent sites and monitor the IP addresses of people downloading "The Expendables."

It's wrong to steal movies, but the granting of these "John Doe" subpoenas is controversial. It shouldn't be, but it is. Granting the subpoenas ignores jurisdiction - it's very unlikely that most of the 'suspects' are in the area covered by the court's jurisdiction - and it ignores the fact that the defendants have very little relationship with each other. They are not part of an organized conspiracy to steal intellectual property.

U.S. Copyright Group and other IP enforcement mills, on the other hand, are in the business of finding ways to make the most money possible filing suits for copyright infringement. Getting all of the infringers under one judge means lots of profit. Having to file in the actual jurisdictions would decrease profit. Lawyers' profit margins are not a good reason to trample defendants' rights.

Is "Do not track" legislation a good idea?

Originally posted 05/09/2011 on lubbockonline.com

Cecilia Kang reports on SFGate.com that more senators are jumping on the "Do Not Track" legislation bandwagon. In general this is a good thing. Companies should not be able to follow your travels across the web, store the data and analyze it until they know your likes and dislikes better than you do. People need to be given control of their own data, and legislation is one way to do it.

But is it the best way? Good intentions (late as they are) aside, rushing a bill through to protect consumers could be as bad as doing nothing. Maybe worse. Suddenly requiring companies to stop tracking and/or destroy the tracking data they've gathered would put many companies out of business - online anyway. So much of the free web we enjoy is paid for by the data gathered by tracking that removing that revenue cold turkey would hit the web like an earthquake.

Web browsers are starting to have "do not track" features built in. Allowing users the choice of whether or not to be tracked is a good thing, and would have a more gradual impact on web businesses. I wonder how many congressmen really understand how integral tracking is to the web experience we all know and love. It pays for many of our favorite websites (Facebook, anyone?). Take it away without granting time to find a new revenue model and they will have to charge. Steve DelBianco, executive director of NetChoice, has a plan for bringing that point home to one senator. I think he should implement it for all of them:

"I've asked for a waiver of Senate ethics rules so I can give Sen. Rockefeller a gift he really needs - an iPad. The senator can see for himself how interest tracking lets advertisers pay for all those free apps and Web services that regular Americans love to use."

How many of the sites you have a free account with would you be willing to pay for? How will they make money without tracking or making members pay? Those are the real questions. And until someone has a good answer, blanket "do not track" legislation is a bad idea.
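
For what it's worth, the browser side of "do not track" is technically tiny: the browser just adds a "DNT: 1" header to every request, and honoring it is entirely voluntary on the site's end. A minimal sketch of the server side, using only Python's standard library, looks like this:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Browsers with the feature enabled send "DNT: 1"; respecting it is up to the site.
            wants_privacy = self.headers.get("DNT") == "1"
            body = b"Not tracking you." if wants_privacy else b"Hello, you (and your ad profile)."
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8000), Handler).serve_forever()

That's the whole problem in a nutshell: the header only expresses a preference, so any teeth would have to come from legislation or from sites voluntarily walking away from tracking revenue.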

Facebook Friday: Woman brags about barfight; Burglar posts picture on victim's page

Originally posted 05/06/2011 on lubbockonline.com

Not so free brawling

Ruth Ramirez smashed a glass into the face of another woman at a Chicago-area bar, then decided to brag about it on Facebook. The woman's face needed 32 stitches and she filed a police report. Now Ruth is sitting in jail waiting for someone to raise money for her $100,000 bond while her child wonders where mommy is.

Just turn yourself in

I thought I blogged about this back in December, but I can't find it. It bears repeating, anyway. Rodney Knight, Jr. broke into the home of Washington Post writer Marc Fisher last December and stole money, a coat and two laptop computers. One of the computers belonged to Marc's son, and Mr. Knight snapped a picture of himself wearing the coat and holding the cash with the laptop's webcam. Then he posted the picture on the son's Facebook page. Well, his trial is over and he's been sentenced to 44 months in prison. It's hard to convince a jury you didn't do it when you hand the police a picture of yourself with the stolen goods.


Managing certificates in Internet Explorer

Originally posted 05/05/2011 on lubbockonline.com

Today we're going to look at how to add and remove security certificates in Internet Explorer 8.

1. Click on "Tools" from the Internet Explorer browser menu. Next, select "Internet Options."


2. Go to the "Content" tab and under the Certificates section, click on the "Certificates" button.


3. Select "All" from the drop-down box located next to "Intended Purpose." Use the scroll bar beside the last tab on the right to find the certificates listed by source type. Highlight the certificate you would like to delete. Next, click on "Remove."


4. Click "Yes" at the prompt to continue the removal process.

If you want to add a certificate instead, click "Import" rather than "Remove," step through the wizard to select the certificate file, and click OK.
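
If you'd rather see the same list programmatically, Python (3.4 or later, on Windows) can read the system certificate stores that Internet Explorer uses. This sketch only reads the stores; it doesn't remove anything.

    import ssl

    # Windows only: "ROOT" holds trusted root authorities, "CA" holds intermediates.
    for store_name in ("ROOT", "CA"):
        entries = ssl.enum_certificates(store_name)
        print(store_name, "store holds", len(entries), "certificates")
        for der_bytes, encoding_type, trust in entries[:3]:   # peek at a few entries
            # Each entry is raw DER; trust is either True or a set of purpose OIDs.
            print("  ", encoding_type, len(der_bytes), "bytes, trusted for:", trust)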

Goodbye, Osama bin Laden

Originally posted 05/02/2011 on lubbockonline.com

Osama bin Laden has been killed after almost a decade in hiding. What does that mean? In the end, not much. It is an intelligence coup, one that, in terms of the difficulty of finding Mr. bin Laden is almost the counter-balance to the blunders that allowed 9/11 to happen. In terms of national security we will have higher risks when travelling to the Middle East for a while, our embassies and companies in the Middle East will be at greater risk of attack, and there will likely be more attempts to get a successful suicide bomber on a plane.

Bill Brenner on the CSO blog believes that bin Laden's death won't change much, and that's actually a good thing. Bill recently made a trip to Ground Zero in New York and was offended that people seemed to have forgotten what happened there. But it didn't take him long to realize that they hadn't forgotten; they had paid the greatest tribute to the victims of Osama bin Laden that could be paid (my words). They had refused to let Osama succeed in his primary goal. As much as our government has been affected by fear of terrorists, the people of New York City have moved past the attack and gone on with their lives. As has the rest of the country. We have not allowed terrorists to terrorize us. So in that sense, Osama bin Laden was a failure. Despite his greatest success and the attempts of many to use it to take away the personal liberties U.S. citizens have always enjoyed, we are still a nation of free men, not a police state. As long as we are Americans, that will not change.

I am not saying that President Obama, the intelligence community and our military don't deserve thanks and praise for killing Osama bin Laden. They do. He needed to be taken out. The fact that another will take his place doesn't change that. The fact that 9/11 was as much our ineptitude as it was his planning doesn't change that. Osama bin Laden attacked our country, and we didn't rest until he paid for that. Though administrations changed, though guiding political philosophy changed, we did not forget what Osama bin Laden had done and we did not rest until he paid. That also will not change.

That is a good thing.

Facebook Friday: Doctor posts ER stories on Facebook

Originally posted on 4/29/2011 on lubbockonline.com

Rebecca Herold reports at infosecisland.com that Dr. Alexandra Thran was disciplined by the Rhode Island Department of Health for posting Protected Health Information (PHI) on her Facebook page. She tried to post non-identifiable information, but one case was unique enough that someone identified the patient from the Facebook post.

On the one hand I can understand the doctor's desire to share the interesting cases she worked on in the emergency room. But interesting stories or not, there have been laws on patient confidentiality for decades, and they were strengthened by HIPAA. Dr. Thran was given a reprimand and fined $500. It was a very lenient punishment, probably because she was obviously trying to keep the stories anonymous and it was only the unusual nature of the one case that caused an issue at all. But because the patient was identified in that one case she could have lost her license to practice medicine and could even have faced criminal charges.

There are 19 types of PHI in the list of protected data. Dr. Thran avoided all of them in every case but the one that got her busted. And in that one case she avoided all but #19: Other unique identifiers that can be attributed to a specific individual.

According to the review board Dr. Thran did almost everything right:

The board said she did not use the names of patients, and did not intend to disclose confidential information, but the nature of the injuries of one patient allowed an unauthorized third party to figure out who it was, the board ruled.

As soon as she found out someone had identified one of her patients from the information on her Facebook account, Dr. Thran took it down. That also weighed in her favor, I'm sure. But she should never have put that information on Facebook in the first place. She could have gotten in trouble even if she'd been overheard talking to friends at Chili's. The potential harm, and the speed at which damaging information can spread, is much greater on Facebook. Dr. Thran dodged a career-ending bullet. I hope she learned from the experience.

How do you manage certificates in Firefox?

Originally posted on 04/28/2011 at lubbockonline.com

Yesterday I talked about what website certificates are. Today I'm talking about what you can do to control the certificates in your browser. It's not difficult, but it can take a bit of searching. We'll start with Firefox. If this post runs too long, Internet Explorer will wait until tomorrow. Maybe we can hit Safari and Opera after that.

Firefox 4 and Firefox 3 are exactly the same except for the very first step.

In Firefox 4, click the orange Firefox menu button and choose Options - Options; in Firefox 3, go to Tools - Options instead. Either way, the Options dialog will pop up.

Click on the 'Advanced' tab, then click on the "View Certificates" button.


Every browser has over a hundred recognized certificate authorities. It's hard to know which ones you really need and which ones you don't, but if you live in the U.S. it's probably safe to remove the certificate from TurkTrust. It's a Turkish certificate authority, and unless you visit Middle Eastern or Turkish websites, you probably won't encounter many sites using it. Once you've selected the certificate you can click on one of the buttons at the bottom of the window to view the details of the certificate, edit the trust level, export the certificate or delete it.


You can look through the certificates in your browser and do a little online research to decide which ones you think you can get rid of. If you want you can search online for certificate authorities to add, although there's not much reason for most people to add certificates unless they need to add one to access the internet through a proxy server. That's something you don't usually have to worry about unless your job requires it.

What are website certificates?

Originally published 04/26/2011 at lubbockonline.com

Have you ever tried to get to a website and gotten the message, "the security certificate is invalid," or something similar? That message means something is off about the bit of code that verifies the identity of the site. It might mean that the certificate is fake and the site is bogus, or it might mean there is a small error and the site is legit. How can you tell?

Site certificates are used when a web site needs to use encryption to protect data in transit. There are a number of organizations that issue security certificates, including governments such as the U.S. and China. In general, certificates are issued for one or two years. The main exception is the certificate issuers themselves, whose own certificates may be good for ten years.

To tell if a site uses certificates all you have to do is look for "https" in the address bar or the locked padlock in the upper right corner of the browser window - the lock symbol does not always appear.

When you visit a website that uses a certificate your browser will check for a few things in the certificate like the issuer, the address of the website and the issue and expiration dates. If any of these are not correct your browser will tell you that there is a problem with the certificate and give you the option of making a one time or permanent exception. You can, if you want, examine the certificate before deciding what to do.

So how do you decide if you should trust a certificate? Unless your browser reports a problem, it all depends on how much you trust the issuer. If your browser reports a problem, there are some things you can check (and a short sketch after the list shows how to look at the same fields yourself):

 

  • who issued the certificate - You should make sure that the issuer is a legitimate, trusted certificate authority (you may see names like VeriSign, thawte, or Entrust). Some organizations also have their own certificate authorities that they use to issue certificates to internal sites such as intranets.
  • who the certificate is issued to - The certificate should be issued to the organization who owns the web site. Do not trust the certificate if the name on the certificate does not match the name of the organization or person you expect.
  • expiration date - Most certificates are issued for one or two years. One exception is the certificate for the certificate authority itself, which, because of the amount of involvement necessary to distribute the information to all of the organizations who hold its certificates, may be ten years. Be wary of organizations with certificates that are valid for longer than two years or with certificates that have expired.
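
Here's that sketch: a rough way to pull the same fields out of a live site's certificate using only Python's standard library. The hostname is just an example, and if the certificate is invalid the handshake itself fails, which is the same check your browser performs.

    import socket
    import ssl

    def describe_certificate(hostname, port=443):
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        issued_to = dict(item for rdn in cert["subject"] for item in rdn)
        issued_by = dict(item for rdn in cert["issuer"] for item in rdn)
        print("Issued to: ", issued_to.get("commonName"))
        print("Issued by: ", issued_by.get("organizationName") or issued_by.get("commonName"))
        print("Valid from:", cert.get("notBefore"), "until", cert.get("notAfter"))

    describe_certificate("www.example.com")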

 

Site certificates are an integral part of web security, but they aren't perfect. You still have to be careful and watch what is happening in your browser.

Is there a smart phone that doesn't track users?

Originally published 04/25/2011 at lubbockonline.com

Declan McCullagh of CNET reports that Android also collects user data. This is similar to the complaint against Apple's iPhone and iPad last week. Google claims no user-identifying data is sent, but that isn't true, strictly speaking.

The article quotes Samy Kamkar, a well-known security researcher, as saying, "It's not tied to a user, but it is a unique identifier to that phone that never changes unless you do a factory reset."

But it's worse than that. It may be impossible to truly anonymize data and have it retain its usefulness for marketing purposes. AOL learned this. Netflix learned this. It's time we learned it. Police routinely request cell phone tracking data from providers, often without a warrant, and the Justice Department is pushing Congress to make it the law of the land that cell phone data can be searched without a warrant. Even if the data from cell providers is anonymized, current technology is more than adequate to allow clever people to attach a name, number and address to the anonymous data. According to Markus Ullman and Marco Gruteser, all that may be necessary to identify a person is their location data:

Unfortunately, anonymous location samples do not fully solve the privacy problem. An adversary could link multiple samples (i.e., follow the footsteps) to accumulate path information and eventually identify a user.
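
A toy example of "following the footsteps": with nothing but anonymous, timestamped points, simply counting where the device sits overnight points straight at someone's home. The coordinates and timestamps below are invented.

    from collections import Counter
    from datetime import datetime

    # Invented "anonymous" samples: (timestamp, latitude, longitude).
    samples = [
        ("2011-04-20 02:14", 33.5846, -101.8757),
        ("2011-04-20 03:40", 33.5846, -101.8757),
        ("2011-04-20 14:02", 33.5912, -101.8698),   # daytime: probably an office
        ("2011-04-21 01:55", 33.5846, -101.8757),
    ]

    def likely_home(points):
        # Round coordinates to roughly 100 meters and count overnight sightings.
        overnight = Counter()
        for stamp, lat, lon in points:
            hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
            if hour < 6 or hour >= 22:
                overnight[(round(lat, 3), round(lon, 3))] += 1
        return overnight.most_common(1)[0][0] if overnight else None

    print("Most likely home location:", likely_home(samples))

From the most frequent overnight spot it's a short step to a street address, and from there to a name.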

No company should be able to just gather data on our whereabouts, our likes and dislikes, our political or any other preferences without our informed permission. But until we force them to stop, they won't. It's in their best interest to gather and use any information they can, either to sell or to use to tailor their offerings to us.

Why Facebook Friday? Anthony Wilson and Ellen Lewin, for starters

Originally posted on 04/22/2011 at lubbockonline.com

I'm going to start using Fridays to post stupid things people do online. In honor of the most concentrated mass of unbridled stupidity online I'm calling it "Facebook Friday."

Today we have two stories to make us feel superior to the great unwashed masses:

Professor tells students, "F*** Off, Republicans"

Craig Robinson of the Iowa Republican reports that Professor of Anthropology Ellen Lewin at the University of Iowa has learned that telling students with opposing views to "F*** Off, Republicans!" is not a good idea. Especially when using your university email account to respond to a university-vetted and -approved mass emailing. Granted, the original emailing was a little off color, but it seems to have been an attempt to make light of being a (persecuted?) minority. Professor Lewin's response makes it seem that feeling a need to hide conservative leanings might not be entirely unjustified at the University of Iowa.

He knew he forgot to do something...

Bill Gallagher of WJBK Detroit reports that after getting a tip that the man they were looking for in a series of robberies was named Anthony Wilson, police looked at his Facebook page. The page had a picture of him wearing the same clothes he wore to rob a bank. He's been arrested and is awaiting trial.

I don't want to spend all my time harping on the risks of self-inflicted injury online, but, like watching one of those 'dumbest' shows, I can't seem to stay entirely away, so every week I'll watch for interesting and amusing examples of self-destructive behavior online and post a few on Friday. Let me know what you think.

Should Apple map your travels? Should police seize your cell phone data?

Thanks to Kenny Ketner for pointing this Apple privacy invasion out to me. TalkingPointsMemo reports that Apple iPhones and iPads are tracking every move we make (if we own one). I would assume iPod Touches are also guilty. Sam Biddle, the author, has a map on the article showing everywhere he's been for the last six months.
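
The history behind that map was reported at the time to live in a SQLite file called consolidated.db inside the device's backup on the paired computer. The table and column names below come from those contemporary reports and should be treated as assumptions (Apple changed the behavior in later updates), but a sketch of reading it would look something like this:

    import sqlite3
    from datetime import datetime, timedelta

    DB_PATH = "consolidated.db"          # assumed filename from 2011 reports
    MAC_EPOCH = datetime(2001, 1, 1)     # timestamps were reported as seconds since 2001-01-01

    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute(
        "SELECT Timestamp, Latitude, Longitude "
        "FROM CellLocation ORDER BY Timestamp"   # assumed table and column names
    )
    for stamp, lat, lon in rows:
        print(MAC_EPOCH + timedelta(seconds=stamp), lat, lon)
    conn.close()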

At this point it looks like the information isn't transmitted to anyone; it's only gathered on the i-device and the computer it is tied to. But does that really matter? Why gather that much information on your customers? There is no reason if you don't intend to use it - or find a use for it. Which raises the question of whether or not Apple or any company has the right to be gathering the data in the first place. But even if you do have the right and you do have a use for it, gathering it could put your customers at risk in a number of ways. Which leads us into the second half of this post:

Infosec Island reports that Michigan State Police are using data extraction devices to collect data from cell phones when they make a traffic stop, and have been for several years. According to the report, the extraction devices used by the Michigan police are capable of breaking encryption if the data collected is encrypted. According to a brochure for the UFED mobile data extraction device, it can extract:

  • Call logs, including SIM deleted call history
  • Contacts
  • Phone details (IMEI / ESN, phone number)
  • ICCID and IMSI
  • Text messages (SMS), including SIM deleted messages
  • Photos
  • Videos
  • Audio files
  • SIM location information: TMSI, MCC, MNC, LAC
  • Image geotags

If that's not enough:

The UFED’s SIM ID cloning feature allows data extraction from PIN locked SIMs, phones with missing SIM cards, and phones without network service. The cloned SIM card also allows access phones without connecting to a network, preventing incoming calls and messages, while preserving the existing call and message history.

Now we have police downloading the data from cell phones of people who have done nothing more than be pulled over for speeding. Shouldn't that fall under the heading of unreasonable search and seizure? Today it's not unusual for someone to have more of their personal lives on their cell phones than in the filing cabinet in their home office. Maybe even more than is in their computer. To say that police can download that data without having to get a warrant or even have probable cause is a gross violation of privacy and civil liberties.

I can understand and to some extent agree with the "border" searches of laptops. Sort of. But the pseudo-justifications given for those searches and seizures do not apply to most, if not all, of the people giving up their cell phone data because an officer said they had to. If it was an iPhone, they've given up their life history for the last six months. I can already see misuses and abuses of such information. Imagine if you happened to be in the area of an unsolved crime at the wrong time. It wouldn't be the first time limited circumstantial evidence has been hyped into a conviction.

The ACLU of Michigan has requested info on what types of data have been gathered and what is being done with it. The state has agreed - if the ACLU will cough up over $500,000 to pay for it. From here, something smells rotten in the state of Michigan.

What data is gathered about us, how it is gathered and who gathers it should be something we have a lot more awareness of and say in. Apple's movement mapping and Michigan's data theft are two things that must be brought to a screeching halt.

Toshiba introduces self-encrypting/self-wiping hard drive

Originally posted on 04/15/2011 at lubbockonline.com

Toshiba has produced a secure laptop drive that has built-in encryption and can be set to wipe itself if it is removed from the computer or the wrong access code is entered.

The drive comes in sizes ranging from 160GB to 640GB and spins at 7200rpm. It's a respectable drive, and the security measures are awesome. But you definitely have to make frequent backups: if anything happens to your computer, your data could be history. Then again, you should be backing your drive up anyway.

Senators Kerry and McCain attempt privacy quarterback sneak

Originally published on 04/14/2011 at lubbockonline.com

Declan McCullagh of the Privacy Inc. blog at CNET acquired the text of Senators Kerry and McCain's proposed privacy bill. The good news is that it is a step in the right direction. The bad news is that it has a glaring hole in its protection. Lord Humongous was right in yesterday's comment when he expressed distrust in the two senators.

The "Commercial Privacy Bill of Rights Act of 2011" is supposed to protect the privacy of U.S. citizens. But Declan says it has a glaring hole:

But the measure applies only to companies and some nonprofit groups, not to the federal, state, and local police agencies that have adopted high-tech surveillance technologies including cell phone tracking, GPS bugs, and requests to Internet companies for users' personal information--in many cases without obtaining a search warrant from a judge.

While disappointing, this isn't really surprising. It's right in line with recent attempts by the FBI and Justice Department to increase their ability to spy on citizens without need for warrants or oversight.

There is a constant struggle for control of information between citizens and governments. The more control over citizens' information a government has, the more control it can have over them. For the first time in history it is trivial for the government to know more about citizens than they know about themselves. It is the nature of government that it will use that ability unless we insist controls and protections be put in place. And we will have to insist. Our representatives may start out working for us, but after a time in Washington (or Austin) they become, by definition, part of the government. Working in our interest becomes a conflict of interest for them, although they don't see it that way.

Kerry and McCain introduce privacy bill

Originally published on 04/13/2011 at lubbockonline.com

Juliann Francis of Bloomberg reports that Senators John Kerry and John McCain have introduced a privacy bill in the Senate called the "2011 Commercial Privacy Bill of Rights Act." The bill will require companies to limit online data collection. Somewhat.

The bill relies on the Federal Trade Commission (FTC) and state attorneys general to make sure companies comply. It doesn't have a "do not track" component, and it has provisions for companies to get exemptions from portions of the bill by designing privacy policies to be approved by third parties vetted by the FTC.

I haven't seen the actual bill yet, but it's a safe bet it doesn't go far enough. It's also a safe bet that it goes as far as it can at this point. A lot of companies count on the income they make gathering and selling consumer information. The privacy situation shouldn't have been allowed to get to this point, but to suddenly cut off that revenue completely could be devastating to the online economy, and maybe to the offline, too. So we must make use of the slippery slope political groups scream about, getting what we can now and pushing for more once everyone is used to the level we've just set.

Texas exposes 3.5 million people's identifying data

Originally posted on 04/12/2011 at lubbockonline.com

The State of Texas announced yesterday that it had exposed the personal data of roughly 3.5 million people online.

The press release pulled no punches, describing how the data had been mishandled and procedures not followed:

The data files transferred by those agencies were not encrypted as required by Texas administrative rules established for agencies. In addition to that, personnel in the Comptroller’s office incorrectly allowed exposure of that data. Several internal procedures were not followed, leading to the information being placed on a server accessible to the public, and then being left on the server for a long period of time without being purged as required by internal procedures. The mistake was discovered the afternoon of March 31, at which time the agency began to seal off public access to the files. The agency has also contacted the Attorney General’s office to conduct an investigation on the data exposure and is working with them.

The information came from the Teacher Retirement System of Texas (TRS), the Texas Workforce Commission (TWC), and the Employees Retirement System of Texas (ERS). There is an information website at www.TXsafeguard.org and an information line at 1-855-474-2065 if you think you might be one of the people whose data was exposed.

The press release and website don't say anything about repercussions for the employees who exposed the data, but I'm sure there will be some. It looks like almost every policy the State of Texas had regarding data transfer and protection was ignored. They may even have created a couple of new ways to mishandle data. At the very least the employees involved should be moved to positions that don't require handling sensitive data.

Friday, March 16, 2012

A few steps to staying more private online.

Originally published 04/08/2011 at lubbockonline.com

The breach at Epsilon has started discussion on how serious having your email address stolen really is. The fact is, having your email address stolen is as dangerous as you allow it to be. To help keep the danger level down, here are some things you can do to protect yourself:

1. Don't click on links in email. If you want to go to the site, type the URL into your browser yourself. With HTML email it is child's play to disguise an email as being from someone you trust and to hide malicious links behind what looks like a legitimate link (see the example after this list).

2. Use the latest version of Firefox for your web browser. You can argue over which browser is the most secure, but Firefox has some very handy add-ons.

3. Once you have Firefox, there are two very helpful add-ons: HTTPS Everywhere and NoScript. NoScript can be found in the Firefox add-ons directory, and HTTPS Everywhere can be downloaded from the eff.org website.

4. Update your software.

5. Keep your mouth shut and your fingers off the keyboard. Before you give anyone any information about yourself, think about whether you need to.

6. Open a garbage email account. Give it to websites that require you to register. Use your main email account for friends and family.

7. Install anti-virus and anti-spyware and keep them updated.

These are just a few of the things you can do to protect your identity online, but they are a good start.
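
To show what point 1 is guarding against, here's a small sketch. The email snippet is made up, but the trick is real: the link's visible text shows one address while the href points somewhere else entirely. The script pulls the links out of the HTML and flags any whose visible text doesn't match where they actually go.

```python
# Sketch: flag HTML links whose visible text looks like a URL but points
# somewhere else. The sample message below is made up.
from html.parser import HTMLParser

SAMPLE_EMAIL = (
    '<p>Your account needs attention. Log in at '
    '<a href="http://example-attacker.invalid/login">https://www.yourbank.com</a></p>'
)

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None       # href of the link currently being read
        self._text = []         # visible text inside that link
        self.suspicious = []    # (shown_text, real_target) pairs that don't match

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            # Visible text that looks like a URL but isn't part of the real target is a red flag.
            if text.startswith(("http://", "https://", "www.")) and text not in self._href:
                self.suspicious.append((text, self._href))
            self._href = None

checker = LinkChecker()
checker.feed(SAMPLE_EMAIL)
for shown, real in checker.suspicious:
    print(f"Shows {shown!r} but actually goes to {real!r}")
```

Your email client renders only the friendly-looking text, which is exactly why typing the address yourself is safer than clicking.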



Should your employer care about your (off time) privacy?

Originally posted 4/7/2011 at lubbockonline.com

Have you ever thought about how the things you do online when you're not at work could affect your job? I'm not talking about a careless rant on Facebook or an ill-considered tweet about your boss. I'm talking about all the information you put up online. Even if all you do is use Google to find information, you've probably put far more than enough information online to identify you.

In 2006 AOL released "anonymized" search data that was used by the New York Times to identify several searchers. For an idea of the kinds of things available in search data, look at the Consumerist's post on AOL User 927. I'm sure he didn't want anyone knowing what he was searching for. Just to make sure we understood how much we tell about ourselves online, around the same time Netflix released anonymized data that ultimately outed gay and lesbian members - or would have, if the researchers had publicly released the data. An in-the-closet lesbian mother sued Netflix over the release of the data. The researchers who were able to determine sexual preference were also able to determine political affiliations. All based on the movies people rented and rated.

If so much can be discovered from supposedly anonymized data, imagine what can be learned from your Twitter and Facebook accounts. It's not uncommon for people to post their full name, birthday, all the schools they attended, the names of most of their family, pets past and present, favorite everything, first everything, and just about everything else. How many of those things are used as security questions to recover your password for your online banking? How many of those things, or some permutation of them, are used as passwords? How many of them are used for passwords related to work?

But even if you use randomly generated passwords, all of that information is useful to bad guys. It is the ammunition for the weapons used in social engineering attacks. With the information on many people's Facebook pages, a skilled social engineer can gain trust, either from you or from someone you know. After all, if he knows so much about you, he must know you. Using that trust he (or she) will get information a person would normally never give someone they barely know. It works better than you might think. A lot better. But if a salesman has ever sold you something you didn't really want or need, or if you've ever watched John Edward on "Crossing Over," you already know that.
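
To make that concrete, here's a toy sketch of how a few facts scraped from a public profile turn into a password guess list. The profile below is made up, and real wordlist generators go much further, but the point stands: if your password is built from things you've posted, it's already half cracked.

```python
# Toy sketch: turning public profile details into password guesses.
# The profile below is made up; real wordlist tools go much further.
from itertools import product

profile = {
    "first_name": "pat",
    "pet": "rex",
    "team": "raiders",
    "birth_year": "1974",
}

words = list(profile.values())
suffixes = ["", "!", "123", profile["birth_year"]]

guesses = set()
for word, suffix in product(words, suffixes):
    for variant in (word, word.capitalize(), word.upper()):
        guesses.add(variant + suffix)

# Dozens of plausible guesses from four facts -- and most "security questions"
# (pet's name, favorite team) are answered outright by the same data.
print(len(guesses), "candidate passwords, e.g.:", sorted(guesses)[:5])
```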

Without privacy you can't have security, and many of us don't even think about privacy while we're online. It's bad enough when I think about all the individuals exposing themselves to all the bad guys on the internet. Then I think about the CSOs who are trying to protect data hidden behind passwords and relationships while all of that information is being published on Facebook, Twitter and the rest of the web, and I wonder how we manage to keep any data secret at all.

Reppler analyzes your Facebook risks

Originally posted 4/6/2011 at lubbockonline.com/glasshouses

Reppler is a free service that will analyze and monitor your Facebook account, check whether you have left any holes in your privacy settings, and tell you what kind of tone your wall posts have. I'm not sure how accurate a lot of the information is, but having your privacy settings checked by a third party doesn't hurt. Having someone monitoring to tell you if you do something that could be career threatening isn't bad, either. But remember, even if Reppler says your information is safe, all it's really telling you is that your settings are at the most private levels Facebook allows. Even at its most private, Facebook is pretty wide open.

Using Reppler is easy. You just go to the site, give it your Facebook login and permission to check everything - yes, it wants everything, but no more than you've already given Facebook and most of the apps you use.