Thursday, September 30, 2010

What is unreasonable search and seizure?

Patrick Jonsson of the Christian Science Monitor writes that the federal government has bought, and is using, vans equipped with backscatter X-ray technology to scan random vehicles.


This is borderline unconstitutional. The 4th Amendment protects us from unreasonable searches. But some people don't see this as a 4th Amendment issue at all, but solely as a security issue. And as a security issue, it has already been implemented:


On Tuesday, a counterterror operation snarled truck traffic on I-20 near Atlanta, where Department of Homeland Security teams used mobile X-ray technology to check the contents of truck trailers. Authorities said the inspections weren't prompted by any specific threat.

I see a problem here. They are scanning vehicles for no reason. There is no threat prompting the scans; they're just doing it to see if they can find anything. To my mind, that is an unreasonable search. Police have broad powers to search a car on the highway, but they have to have a reasonable suspicion that something illegal is happening.


Several of the statements by officials in this article show a fast-and-loose attitude toward citizens' rights. One says that there isn't enough detail to embarrass anyone. Another says law enforcement already has broad search and seizure power on highways. Neither statement shows a very high regard for personal privacy or freedom. They rank right up there with, "If you're not doing anything wrong, you don't have anything to worry about."


These vans are an excellent tool in our anti-terrorism arsenal - if used properly. But using them for random scans of vehicles there is no reason to suspect of involvement in terrorism is an abuse. Like any abuse, it must be stopped.

Wednesday, September 29, 2010

Administration's desire to wiretap web could freeze the cloud

Yesterday I blogged about the feds' desire to wiretap the internet. So did a lot of other people. One of the best posts was by Rich Mogull, CEO of Securosis. His post on the Securosis blog, "Proposed Internet wiretapping law fundamentally incompatible with security," gives only a glancing nod to privacy issues, but shows the hard technical and business realities of what the administration is proposing. And those are the realities that will put a stop to this proposal. As I've said before, as the government grows in size and scope, citizens' privacy becomes a barrier to continued growth. That means that privacy issues will have less and less effect on government plans.

Of course, there comes a point when private enterprise becomes a roadblock to continued government growth. We're almost there - but that's a topic for a different blog.

Rich read the same NYT article I did, and saw three likely requirements for the law as reported:

  • Communications services that encrypt messages must have a way to unscramble them.
  • Foreign providers that do business inside the United States must establish a domestic office capable of performing intercepts.
  • Developers of software that enables peer-to-peer communication must redesign their services to allow interception.

Looks simple enough, doesn't it? But that apparent simplicity reveals a fundamental ignorance. The first requirement might be fairly simple, technically. But complying with it would make hacking into a system, whether by social engineering or technical means, much simpler.

To allow a communications service to decrypt messages, they will need an alternative decryption key (master key). This means that anyone with access to that key has access to the communications. No matter how well the system is architected, this provides a single point of security failure within organizations and companies that don't have the best security track record to begin with. That's not FUD -- it's hard technical reality.
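To make that single point of failure concrete, here is a toy Python sketch. This is emphatically not real cryptography (the XOR "cipher" exists only to keep the example short), and it is not any real provider's design; it just shows the structure the quote describes: every message gets its own session key, but a copy of that key is also wrapped under one master key, so whoever holds the master key can read everything.

```python
# Toy sketch of key escrow (NOT real crypto). Each message is encrypted
# under a fresh session key, but a copy of the session key is also
# wrapped under a single master key for "lawful access." Anyone who
# steals MASTER_KEY can decrypt every message ever sent.
import hashlib
import os

MASTER_KEY = os.urandom(32)  # the single point of security failure


def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a key (illustration only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]


def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))


def encrypt_message(plaintext: bytes):
    session_key = os.urandom(32)
    ciphertext = _xor(plaintext, session_key)
    escrow = _xor(session_key, MASTER_KEY)  # wrapped copy of the key
    return ciphertext, escrow


def master_decrypt(ciphertext: bytes, escrow: bytes) -> bytes:
    # No warrant check, no user involvement -- possession of the master
    # key is the only thing required.
    session_key = _xor(escrow, MASTER_KEY)
    return _xor(ciphertext, session_key)


ct, esc = encrypt_message(b"meet at noon")
print(master_decrypt(ct, esc))  # b'meet at noon'
```

Notice that the sender and recipient never appear in `master_decrypt` at all; that is exactly what makes the master key such an attractive target.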

What business wants to make it easier for hackers to break in? But that is what this law would do. And it won't just affect businesses. Like your electronic bill pay? Say good-bye to it. Unless I miss my guess, this lovely idea would put banks out of compliance with Sarbanes-Oxley or PCI-DSS, or both. For that matter, the credit card industry would probably have to shut down ... Maybe this one isn't such a bad idea. ;^)

The second point has the most potential for blatant harm. It could do serious damage to our international reputation, strain relations with those friendly to us, and possibly break down fledgling relationships with countries who are not necessarily well disposed to help us.

Requiring foreign providers to have interception offices in the US is more a political issue than a technical one, because once we require it, foreign governments will reciprocate and require the same of US providers. Want to create a new Internet communications startup? Better hope you get millions in funding before it becomes popular enough for people in other countries to use it. And that you never need to correspond with a foreigner whose government is interested in their actions.

Peer-to-peer networks, the third point, perhaps present the greatest technical difficulty:

There are only 3 ways to enable interception in peer to peer systems: network mirroring, full redirection, or local mirroring with remote retrieval. Either you copy all communications to a central monitoring console (which either the provider or law enforcement could run), route all traffic through a central server, or log everything on the local system and provide law enforcement a means of retrieving it. Each option creates new opportunities for security failures, and is also likely to be detectable with some fairly basic techniques -- thus creating the Internet equivalent of strange clicks on the phone lines, never mind killing the bad guys' bandwidth caps.
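The third option in that quote, local mirroring with remote retrieval, can be sketched in a few lines of Python. The class and method names here are illustrative, not from any real peer-to-peer protocol; the point is that the "lawful" retrieval interface is just an interface, available to whoever finds it.

```python
# Sketch of "local mirroring with remote retrieval": every message a
# peer sends or receives is also written to a hidden local log that can
# later be pulled. Names are hypothetical, for illustration only.
class InterceptingPeer:
    def __init__(self, name):
        self.name = name
        self.inbox = []
        self._mirror = []  # the hidden local log mandated by the law

    def send(self, other, text):
        self._mirror.append(("out", other.name, text))
        other.receive(self, text)

    def receive(self, sender, text):
        self._mirror.append(("in", sender.name, text))
        self.inbox.append((sender.name, text))

    def lawful_retrieval(self):
        # Anyone who can reach this interface -- an agency with a
        # warrant, or an attacker without one -- gets the full history.
        return list(self._mirror)


alice, bob = InterceptingPeer("alice"), InterceptingPeer("bob")
alice.send(bob, "hi")
bob.send(alice, "hello")
print(alice.lawful_retrieval())
# Every message, both directions, without alice's knowledge or consent.
```

And as the quote notes, any of the three options leaves detectable traces: extra traffic, extra storage, or an extra hop, the Internet equivalent of clicks on the phone line.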

Rich goes on to point out some other issues, such as handing oppressive regimes monitoring tools they don't already have. That shows a lack of foresight on the part of the enforcers, but it isn't as bad as (what I see as) the outright lies claiming that this is just an effort to get back capabilities that the fluid nature of the internet and readily available strong encryption tools have taken away. It is not that. It is an attempt to find an easier way to spy, and to get it built into the fabric of the Internet.

He also points out that the police do need tools to do their job. But doing their job should not interfere with the security and operations of legitimate businesses just to make law enforcement's job easier.

Tuesday, September 28, 2010

Law enforcement wants to wiretap the Internet

The New York Times reports that the Obama administration wants to make it easier to wiretap the internet. Charlie Savage reports that the government is seeking the ability to monitor any communication on the internet - including encrypted communication. It wants to do this by requiring communications providers to build in back doors that allow them to decrypt communications if a legal wiretap order is obtained.




The desired law would require that companies providing encrypted email, such as Research in Motion (RIM), social networking sites, and peer-to-peer messaging providers all provide the means for government access to communications over their networks. It would also require people writing software for peer-to-peer networks to include the means for the government to spy on their users.




What you think of this idea depends, in part, on how much you trust the government not to abuse the system. It also depends on how much faith you have in the idea that these back doors will not be accessed and exploited by hackers:


Steven M. Bellovin, a Columbia University computer science professor, pointed to an episode in Greece: In 2005, it was discovered that hackers had taken advantage of a legally mandated wiretap function to spy on top officials’ phones, including the prime minister’s.



Gotta love it. Legally mandated 'enforcement' access is turned on the enforcers. And Dr. Bellovin believes that once back doors are engineered into systems in the US, they will be exploited. I have to ask: with the number of unknown exploits discovered by white-hat and black-hat hackers every day, how long will it take the black hats to find the ones they know are there because they are legally required?




Some of the reasons being given don't really show a need for the proposed law, but rather a failure to understand the technology:




But as an example, one official said, an investigation into a drug cartel earlier this year was stymied because smugglers used peer-to-peer software, which is difficult to intercept because it is not routed through a central hub.



So in order to make it possible for the government to spy on peer-to-peer networks, we're going to require that they be routed through a central server? Then it's no longer peer to peer. And the added expense will kill most peer-to-peer software anyway. But that's really what the government wants to happen.




And for some reason, government officials seem to think there is a reasonable similarity between cell phones and the internet: "They also noted that critics predicted that the 1994 law would impede cellphone innovation, but that technology continued to improve." In 1994, cell phones were still a fledgling technology. They had been around for a couple of decades, but the networks and user base were still (relatively) small. The internet is a 40-year-old technology involving a huge network with almost 2,000,000,000 users, almost 300,000,000 of them in North America. The amount of legacy hardware and software is almost unimaginable. Comparing a fundamental change to the structure of the Internet now with a fundamental change to cell phones in 1994 is ridiculous.




I agree with Benjamin Franklin. To paraphrase, anyone willing to give up some freedom for some security will wind up securely without freedom. I'm distrustful of anything that makes it easy for the government to spy on law abiding citizens, even (or maybe especially) if it is pushed as being necessary to catch bad guys. The Patriot Act was "necessary." Tapping the vast majority of phones in the U.S. was "necessary." Making it possible for the government to access all communications is "necessary." I don't think so.

Monday, September 27, 2010

Fire up your firewall

What is a firewall, and why should you use one? A firewall is basically a gatekeeper between your computer or network and the wider internet. It prevents communication between your network and the internet that you don't want, while allowing communication that you do. That is why it's a good idea to always have a firewall running to help protect you from many of the dangers of online life.


There are two types of firewalls: software and hardware. If you are running an OS that is even slightly recent (Windows XP and up, OS X, any Linux or Unix), you have a software firewall available on your computer. If it is fairly recent (Windows XP SP2, OS X 10.4; on Linux it depends on the distro), the firewall is already activated and protecting you. One way to find out is to go to ShieldsUP! and follow the instructions. ShieldsUP! checks the first 1056 ports to see if your firewall is blocking them.
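In the same spirit, here is a minimal Python probe you can run yourself. Note that checking your own machine from inside the firewall is not the same as ShieldsUP!'s scan from the outside, so treat this only as an illustration of what "checking a port" means; the port list is arbitrary.

```python
# Minimal TCP port probe: try to open a connection and report whether
# anything answers. A firewall in "stealth" mode will simply let the
# attempt time out instead of answering at all.
import socket


def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an error number otherwise.
        return s.connect_ex((host, port)) == 0


# Probe a few well-known ports on the local machine.
for port in (22, 80, 443):
    state = "open" if port_open("127.0.0.1", port) else "closed/filtered"
    print(f"port {port}: {state}")
```

A real outside scan would run this loop from another machine against your public address, which is exactly what the ShieldsUP! service does for the first 1056 ports.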


If you have a router between your computer(s) and the internet, it probably also functions as a hardware firewall. Most, if not all, let you open ports for online gaming or other programs. The software firewall that came with your OS probably will, too. But it may be a lot harder to set up.


Most consumer routers use Network Address Translation (NAT). NAT hides the real addresses of the computers connected to the router, making it harder for bad guys to access them. Originally that was just an added benefit, but as viruses, trojans, and other means of attacking networked computers have proliferated, it's become an essential function.
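A toy model makes the mechanism clear. This is a simplified sketch, not how any particular router firmware works: each outbound connection from a private address gets mapped to a fresh port on the router's single public address, replies are translated back, and unsolicited inbound packets that match no mapping are simply dropped.

```python
# Toy NAT translation table. The public address is a documentation
# address (RFC 5737) standing in for the router's real WAN IP.
import itertools

PUBLIC_ADDR = "203.0.113.7"


class NatTable:
    def __init__(self):
        self._next_port = itertools.count(40000)
        self._out = {}   # (private_ip, private_port) -> public_port
        self._back = {}  # public_port -> (private_ip, private_port)

    def outbound(self, private_ip, private_port):
        """Rewrite an outgoing connection to the router's public address."""
        key = (private_ip, private_port)
        if key not in self._out:
            pub = next(self._next_port)
            self._out[key] = pub
            self._back[pub] = key
        return PUBLIC_ADDR, self._out[key]

    def inbound(self, public_port):
        """Translate a reply back, or drop it if nothing matches."""
        # Unsolicited packets to an unmapped port return None -- this
        # drop-by-default behavior is NAT's accidental firewall effect.
        return self._back.get(public_port)


nat = NatTable()
print(nat.outbound("192.168.1.10", 51515))  # ('203.0.113.7', 40000)
print(nat.inbound(40000))                   # ('192.168.1.10', 51515)
print(nat.inbound(40001))                   # None -> dropped
```

The outside world only ever sees `203.0.113.7`, never the `192.168.x.x` machines behind it, which is why an attacker scanning your public address can't reach them directly.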


If you would like to learn more about firewalls, check out these links:

There are other topics you can look up to understand more about firewalls, such as Network Address Translation (NAT) and proxy servers. And don't forget anti-virus and anti-spyware protection. It takes more than just a firewall to protect a modern networked computer.

Friday, September 24, 2010

Enter the evercookie

Security researcher Samy Kamkar has created what he calls "evercookies" and others are calling "frankencookies." I could add "zombiecookies." Like Frankenstein's monster, they are created from 10 different types of data-storing objects. Like zombies, unless you completely eradicate all of its components, the evercookie will return.


On Samy's evercookie page he gives some details, along with a demonstration and two different links to download the source code. Among the details he gives are the types of storage objects used to retain and resurrect the data:


Specifically, when creating a new cookie, it uses the
following storage mechanisms when available:
  • Standard HTTP Cookies
  • Local Shared Objects (Flash Cookies)
  • Storing cookies in RGB values of auto-generated, force-cached
    PNGs using HTML5 Canvas tag to read pixels (cookies) back out
  • Storing cookies in and reading out Web History
  • Storing cookies in HTTP ETags
  • Internet Explorer userData storage
  • HTML5 Session Storage
  • HTML5 Local Storage
  • HTML5 Global Storage
  • HTML5 Database Storage via SQLite

Samy provides a demonstration that produces supposedly non-traceable evercookies, cookies with just enough information to prove the cookies have been created. He notes that private browsing in Safari defeats evercookies. I tested Firefox, and it also killed evercookies in private browsing mode. Both only kill evercookies if you are already in private browsing mode when the cookies are placed. Safari's reset option will not kill an evercookie.

Evercookies are a heinous development - from a privacy point of view. To merchants and ad services they are a gift from the Internet gods. Before we have a good answer to Flash cookies, evercookies appear, making Flash cookies look positively ephemeral. Because they are composed of several different files of several different types in multiple locations, they are hard to find, and if any piece of an evercookie is left behind, the entire cookie can be recreated. As if that weren't bad enough, Samy is seeking more ways to make evercookies hard to find and kill.
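The resurrection logic is simple enough to sketch. Samy's actual evercookie is JavaScript running in the browser; this Python sketch is purely conceptual, with made-up store names, but it shows the core trick: write one identifier to many independent stores, and on every read, let any single surviving copy rewrite all the others.

```python
# Conceptual sketch of evercookie resurrection (not Samy Kamkar's
# actual JavaScript). Store names are illustrative stand-ins for the
# real mechanisms (HTTP cookies, Flash LSOs, cached PNGs, ETags, ...).
class Evercookie:
    STORES = ["http_cookie", "flash_lso", "png_cache", "etag", "local_storage"]

    def __init__(self):
        self.stores = {name: None for name in self.STORES}

    def write(self, value):
        """Replicate the identifier into every available store."""
        for name in self.stores:
            self.stores[name] = value

    def clear(self, *names):
        """Simulate the user deleting some (but maybe not all) stores."""
        for name in names:
            self.stores[name] = None

    def read(self):
        """Any single surviving copy resurrects all the others."""
        for value in self.stores.values():
            if value is not None:
                self.write(value)
                return value
        return None


ec = Evercookie()
ec.write("uid-12345")
# User "deletes their cookies" -- but misses HTML5 local storage.
ec.clear("http_cookie", "flash_lso", "png_cache", "etag")
print(ec.read())                  # uid-12345, fully resurrected
print(ec.stores["http_cookie"])   # uid-12345 again
```

That is why partial cleanup is useless: unless every one of the 10 mechanisms is wiped in the same pass, the tracker gets its identifier back on the next visit.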

Privacy is in large part control of information. The more control you have over your information, the more privacy you have. The more others control your information, the less privacy you have. Things like Flash cookies and evercookies remove control of your information from you and give it to others and are designed to make it hard for you to get rid of them. That is enough reason for me to dislike them.

Thursday, September 23, 2010

Maine high court limits damage claims in data breaches

A little over two years ago, Hannaford Brothers, a large grocery retailer in the Eastern U.S., suffered one of the worst data breaches (bankinfosecurity.com) up to that time. Many lawsuits were filed as a result of the breach. This week the high court in Maine ruled on the validity of claims in some of those cases.

According to a report on WGME 13 in Portland, Maine, the high court ruled that you cannot sue for damages unless you suffer "financial losses, physical harm or identity theft."

I'm sure this disappointed the people trying to recover money for the time they spent straightening out the mess caused when their credit card info was stolen in Hannaford's breach. I'm not sure I agree with the court decision, but I can understand it. I think. On the one hand, the time and effort spent was directly related to the data breach. On the other hand, there was no financial cost, no damage, nothing that was "lost" (except time, the one thing you can't replace). And paying reparations to people who didn't suffer any actual harm could open the gateway to all kinds of lawsuits with dubious claims that would tie up the courts, often with little chance of success.

The law is different in every state, so the Maine decision may not have any bearing on lawsuits filed anywhere else - but it is a precedent, and lawyers will pull precedents from wherever they can find them. Only time will tell how much effect this decision will have outside of Maine.

Google Apps now more secure than many banks

Who knew that Google would make its free app offerings more secure than many banks make account access? Mark Hachman at PCMag.com reports that Google Apps Taps Phone for Two-Layer Security.

It's pretty cool. It's only available for enterprise customers right now, but it is going to be available soon for everyone who uses Google Apps and has a cell phone, via SMS text messages. There are apps for Android and BlackBerry phones, with an iPhone app in the works. This is a good thing. It makes it much more difficult to hack into someone's Google Apps and gives yet another multi-factor authentication option.

After reading the PCMag story, I checked the "Krebs on Security" blog to see if he had anything to say about the new Google Apps feature. He blogged about it earlier this week. In Google Adds 2-Factor Security to Gmail, Apps he notes that the free two factor authentication offered by Google is better than that offered by many banks. The lousy online security offered by many banks is a topic Brian Krebs talks about a lot, and one I talked about in regard to Plains Capital Bank suing their customer last year. Brian saw a bonus in Google's new feature that didn't occur to me, although it seems an obvious way to pay for people making free use of it. Offer the service to banks. Google can probably offer the service at a nice profit for Google and still be far cheaper than solutions that require banks to buy hardware. The hardware will be provided by the customers. It's a win for everyone.
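For the curious, the standard scheme behind phone-generated login codes of this kind is the time-based one-time password (TOTP, RFC 6238, built on HOTP, RFC 4226): a secret shared between the server and your phone, combined with the current 30-second time window, yields a short code. Whether Google's implementation matches this exactly is an assumption on my part, but the idea can be sketched with nothing but the Python standard library:

```python
# Time-based one-time password (TOTP, RFC 6238) using HMAC-SHA1.
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, for_time=None, digits=6, step=30):
    """Generate the TOTP code for a shared secret and a point in time."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step          # 30-second time window
    msg = struct.pack(">Q", counter)         # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


secret = b"12345678901234567890"  # the RFC test secret
print(totp(secret, for_time=59))  # 287082, per the RFC 6238 test vectors
```

The second factor works because the secret never travels over the network at login time; stealing your password alone is no longer enough.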

Will Facebook make fair trial impossible?

Over the past fifty years there has been an ever-growing problem in taking people accused of high-profile crimes to trial. How do you ensure an unbiased jury when the pool has been tainted by repeated reports of facts, speculation, and fiction about the case? With the popularity of the internet and the instant reporting of Twitter and Facebook, this problem has become even more severe.

Which brings us to the case of a 16-year-old alleged victim of gang rape in Pitt Meadows, British Columbia. The story was reported on CTV News British Columbia by Julia Foy. A group of males allegedly raped her at a rave. The stories are fairly predictable: the men say she consented; the girl and the police say she didn't. Both sides agree that she had taken drugs that night.

A Facebook page, "Support-for-16yr-old-victim-in-Pitt-Meadows," was put up in defense of the girl, and before long a second page, "Reasonable Doubt in Pitt Meadows," was formed to support the alleged rapists.

Not surprisingly, the group supporting the girl has many more friends. Even if you are willing to be open-minded and admit that the men may be telling the truth, few people will want to come out and say publicly that none of us know enough to say whose story is true. Especially since the men are guilty of statutory rape regardless.

I realize that I am, in a sense, perpetuating the problem I am complaining about. I'm probably not going to have much effect on the jury pool, even if they change venue, but as time passes and ever more people are connected, the viral nature of the internet will make it harder and harder to find unbiased jurors. As I write this there are 9,200 followers of the "Support" pages and a mere 92 followers of the "Reasonable Doubt" pages. Pitt Meadows has a population of about 17,500. I know that not all of the followers are from Pitt Meadows, but the odds are that most are from within the coverage area of local news, which means there are probably 9,200 potential jurors who have already made up their minds about the case.

This is not one of the things I think about when I talk about the importance of privacy and the problems of Facebook. But it is a problem. And it is important. For a fair trial to be possible, an accused person's privacy must be maintained until the trial is over. To be fair, it's not so much a Facebook problem as it is a human problem, and it would exist whether Facebook allows such groups or closes them down as soon as it hears about them. Add Twitter and the myriad other social networking sites, and we are fast approaching a time when unbiased juries are hard to find. So, admitting that, how can we protect the right of the accused to a fair trial with an impartial jury in an age of instant communication?

Study shows security of medical data improving, still bad

Earlier this year Kroll Fraud Solutions (Kroll) and Healthcare Information and Management Systems Society (HIMSS) released the results of their second biannual study of patient data safety at healthcare providers.

The study noted that there may be no other place in private industry that is as rich a target for identity theft and data fraud as healthcare providers. They can possess just about every type of identifying info on their patients: Social Security numbers, driver's license numbers, insurance policies, religious affiliation, addresses and phone numbers, etc.

According to the study there have been over 110 breaches of personal data at healthcare organizations since January 2008. The breaches have affected over 5 million people. Almost half of them involved employees - negligence or loss was the cause slightly more often than malicious employees. The next biggest cause of data breaches was theft, with system hacks and viruses coming in a very distant third.(1)

According to the study most health organizations are taking steps to ensure the security of patient data, but hospitals focus on responding to a breach to the detriment of preventing one.(2) But most hospitals are open to change and to getting help to improve their data security. Not only is the cost of a data breach high and getting higher, they don't want their customers/patients harassed or given any other reason to sue them.

Despite ever-increasing regulatory requirements, or maybe because of them, the number of data breaches at hospitals in the past 24 months has increased. Part of the problem is the attitude surrounding patient data. It's not that hospitals don't want to protect their patient data; it's that their efforts since HIPAA was first passed have been geared to reacting to a breach, not preventing it. Until that changes, we will continue to have frequent data breaches. Happily, hospitals seem willing to learn how to better protect their patients' data. The only question is, how long will it take?

(1) 2010 HIMSS Analytics Report: Security of Patient Data commissioned by Kroll’s Fraud Solutions p3

(2) ibid p5

Diaspora - Social Networking startup learns security no easy task

Diaspora is the brainchild of four New York University students. Earlier this year they announced their plan to create a privacy-respecting Facebook clone and tried to raise a modest $10,000. They were inundated with over $200,000 in donations.

Dan Goodin reports at the Register that Diaspora has released pre-alpha code for its open source version of Facebook. Pre-alpha code is code that has been written and may perform the basic functions intended, but has not been tested. It may (and probably will) have major bugs and flaws that will have to be found and fixed before final release. And Diaspora's initial code definitely has flaws. Dan reports that Patrick McKenzie, a software developer, has found major security holes:

“The bottom line is currently there is nothing that you cannot do to someone's Diaspora account, absolutely nothing,” said Patrick McKenzie, owner of Bingo Card Creator, a software company in Ogaki, Japan.

“About the only thing I haven't been able to do yet is to compromise the security of the server that Diaspora is installed on. That's not because that isn't possible. If a professional security researcher goes after this, I have every confidence that they will be able to do that.”

That's pretty extreme, even for pre-alpha code. But the good news is that the project is open source, so there are more eyes on the code than just the initial programmers'. So the odds that the errors will be fixed are pretty good. But there's also bad news. McKenzie participates in the project's email list, and has seen people trying to get Diaspora sites running, despite the programmers clearly stating it's not ready for the real world yet. He's concerned they're going to be burned very badly because they don't understand the problems.

Diaspora is a good idea. Since the first boy asked the first girl to watch the stars with him, people have been social without revealing everything about themselves to everybody. Facebook ignores that history, claiming that it is only giving people what they want - despite public outcry every time privacy controls are opened up. People use Facebook for a number of reasons, one being that despite its flaws, it's the best game in town. It could use some real competition, but Diaspora has an uphill battle, both because of its code problems and because it has set its sights on the Goliath of social networking. Only this Goliath is the size of the Empire State Building, and the David that is Diaspora is smaller than an ant.

Good luck, Diaspora.

Proof that without privacy, security is moot, and strong passwords still matter

Elinor Mills writes the "Security Complex" blog on cnet.com. She was talking to the founder of People Security, a security consulting firm, when he said that it's easy to hijack email accounts. She challenged him to hack hers. She details the experience on her blog.

It's fascinating. He started knowing only her name and employer, and used mostly free, readily available resources to find information that might be about her. His big gun was Ancestry.com, which anyone can access either as a free trial or for a relatively cheap fee.

He had a time limit of an hour, which turned out to be not quite enough time. But Elinor continued what he'd started, and knows that with just a little more time he would have had access to her account. She also notes that, as someone who writes about security issues for a living, she is more security conscious than most, and probably a little harder to crack. But the amount of information that could be gathered in an hour was shocking, and all he was trying to do was figure out her email password.

Read the article. Ms. Mills's experience is strong evidence that without privacy you cannot have security, and vice versa. Being able to control who can access information about you is the only way to have privacy.
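The other half of the title, that strong passwords still matter, is easy to put numbers on. A password built from guessable personal facts is effectively a short password over a small alphabet. This back-of-the-envelope sketch assumes an offline attacker making a billion guesses per second (my assumption, not a figure from the article):

```python
# Rough brute-force comparison: short lowercase password vs. a longer
# random one drawn from a ~72-symbol alphabet (letters, digits, symbols).
def guesses(alphabet_size: int, length: int) -> int:
    """Worst-case number of guesses to exhaust the keyspace."""
    return alphabet_size ** length


weak = guesses(26, 8)     # 8 lowercase letters
strong = guesses(72, 12)  # 12 mixed characters
rate = 1e9                # assumed: a billion guesses per second, offline

print(f"weak:   ~{weak / rate / 3600:.2f} hours to exhaust")
print(f"strong: ~{strong / rate / (3600 * 24 * 365):.0f} years to exhaust")
```

Of course, as the Mills experiment shows, none of this helps if the password (or its reset questions) can simply be derived from public information about you.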

Thursday, September 16, 2010

Religiously filtering search

Habiba Nosheen on NPR did an interesting story yesterday on a new trend in internet search. The internet is a wide-open medium. There are few, if any, limits on what can be published to the web. That is a blessing, because we can find information on almost any subject by typing a few keywords into a search engine. It is a curse, because often we will get information we never intended to find - and may not have wanted. Sometimes we get information that's downright repulsive or directly counter to our beliefs. Now there are search engines that cater to groups, specifically religious groups, by filtering out content that does not conform to the religious belief system. There are sites for Jews, Muslims, and Christians.

This is an interesting development, and perhaps an obvious evolution from filtered content. Filtered search is similar to filtered content, but more flexible. When you subscribe to a filtered content provider you can search using any search engine, but may not be able to access all of the results that come up. With filtered search you will be able to access any link in the results, and if you either don't get the result you were expecting or think you're missing something you can go to a traditional search engine and access any result that pops up.

The article names three faith-centric search engines:

seekfind is a Christian search engine that appears to run its own indexes.

jewogle is a Jewish search engine powered by Google.

I'mHalal is an Islamic search engine that also appears to do its own indexing. I like that it has general web search, news search, and Qur'an search.

Filtered search engines using religious guidelines are a pretty neat idea. They allow people to get online who might not be able to take advantage of the World Wide Web without the protection filtering provides, but they don't limit greater access if it is needed.

I wonder how well the Christian search engine conforms to the idea of being in the world but not of it? But I wonder the same thing about Christian bookstores, movies, and other "Christian" things that isolate us from the world we're supposed to be "salt and light" to.

Monday, September 13, 2010

Don't eat Eric Schmidt's ice cream

Google was called to task recently by insidegoogle.com for privacy statements by CEO Eric Schmidt. The rebuke took the form of a 15-second video played on the jumbotron in Times Square. You can see it here. But that wasn't enough: they put up a longer version with a voice track on their website and on YouTube here.

The shorter video has the creepier appearance, relying on "Schmidt's" facial expression to convey the wickedness of Google's data gathering, but the longer version gives examples of what Google might know about you. Unfortunately, they chose two examples that fall right in line with the paraphrased Eric Schmidt quote they use, "If there's anything you don't want anyone to know, you shouldn't be doing it in the first place." There are plenty of things that you might not want people to know, but that are completely legitimate. I guess they don't have enough of a 'creepy factor' for an ad like this, though. Google's response to the videos was very sedate, or at least I didn't see loud objections or denials. They made changes to clarify their privacy policy and even, after initial refusal, allowed insidegoogle.com to purchase advertising on Google for the purpose of criticizing Google.

Google, possibly more than any other company - even Facebook - knows us better than we know ourselves. They talk about stored data being anonymized, but for it to be useful in the ways Google uses it there has to be a way to connect it to us. That's how personalized searches, search term suggestions and the other little perks we take for granted now that didn't exist just a few years ago work. If Google can connect it to us, it's possible that someone else might obtain it and make the same connection.

So is Eric Schmidt one of several 'dark lords' of internet data gathering? Or is he a messiah, using the personal data we gift him with to improve our internet experience and grant us greater and more personal online lives? Or is he a well meaning businessman who really doesn't understand the implications of what he is doing? I doubt that any of those are completely true. But it is true that Google's business could not exist as it does if it didn't have access to as much information as it can gather from us. So I imagine that there is a little of the sinner and the saint in Mr. Schmidt's motives, and perhaps a little of the naive visionary as well. But regardless of his motivations, it's our job to make sure that Google and companies like it only gather information we want gathered. To do that we have to know what information they are gathering, why, and what is being done with it, and they should be willing to tell us.

Take your headlines with a grain of salt

Earlier this week Time reported that Kosuke Tsuneoka, a kidnapped Japanese journalist, was freed thanks to Twitter. It sounds really good, but after reading several reports, I didn't see the connection. Sure, Mr. Tsuneoka did manage to get a message out by tricking his captors into letting him use Twitter - to show them how to use it. He was freed a few days later, but no one can actually show a connection. I finally came across a story on Newser that admitted as much.

It was a good headline, all the variations of it: "How Twitter helped free a hostage," "Journalist tricks captors with Twitter," etc. But it didn't have anything to do with the real story, which amounted to, "Muslim journalist freed after five months captivity."

The funny thing is, there are probably true stories out there, if anyone looked hard enough. But they aren't stories about (Muslim) Japanese journalists who tricked their ignorant (not really) Taliban captors into letting them send out a Twitter message. At least the part about the Twitter message was real. But did it really call for such sensationalist headlines that only undermine the reputations of the sites that use them?

Facebook: Improving the stalkers life?

allfacebook.com reports that Facebook is experimenting with a new feature called "subscribe." This feature will allow you to subscribe to friends' feeds and be notified whenever they do anything on Facebook. The interesting questions at this point are: who can you subscribe to, and how will it work? It could be the ultimate stalking tool, but it could also make Facebook even more interactive and immediate - and more like Twitter.

While my mind is still spinning thinking of the implications, Dan Tynan has a slightly irreverent article on PCWorld taking a look at the possibilities, and at the questions we should ask before freaking out. And he's right. Depending on the implementation, subscribing could be a boon to Facebook users (and a body blow to Twitter). Improperly done it could be a stalker's best friend. Odds are the implementation will be somewhere in-between at first. Unfortunately Facebook's history makes me wonder which direction it will tend in the future.

But until then, if the trial is successful, enjoy Facebook subscriptions, and remember to keep an eye on those privacy settings.

Strong Passwords - not really so important?

It's been several years since I read Bruce Tognazzini's "D'ohLT #2: Security D'ohLTs," an article about the ridiculous steps security experts take to secure systems, and why the real effect is to reduce security. Bruce is a human interface specialist who used to work for Apple Computer, among many others. While he is known for his ability in computer/human interfaces, reading just a few of his articles has made it clear that everything we do is human interfacing, and what kind of result we get is in part dependent on how well we take that into account.

This brings us to the point of this post. One of Tog's points is that password requirements often guarantee that passwords will be stickynoted to the monitor, or under the keyboard, or somewhere else easy to get to that totally undermines the purpose of having a password in the first place. That was in 2003. It appears that security experts are beginning to get the same idea a mere seven years later. In the NY Times Digital Domain column, Randall Stross reports that some security experts are becoming less concerned with passwords and more concerned about threats that can undermine or circumvent password security:

Here’s one threat to keep you awake at night: Keylogging software, which is deposited on a PC by a virus, records all keystrokes — including the strongest passwords you can concoct — and then sends it surreptitiously to a remote location.

“Keeping a keylogger off your machine is about a trillion times more important than the strength of any one of your passwords,” says Cormac Herley, a principal researcher at Microsoft Research who specializes in security-related topics. He said antivirus software could detect and block many kinds of keyloggers, but “there’s no guarantee that it gets everything.”

So what is leading security professionals - long-time proponents of strong, hard to guess (and remember) passwords - to consider simpler passwords a viable option? The real-world experience of millions of users on sites like eBay, Amazon, and Paypal. These are sites that hold users' financial information - bank accounts, credit card numbers - and can access it. Considering their simple password requirements, you would expect these sites to suffer breaches all the time. But they don't. Why? One possibility is that most commercial web sites lock you out for a period of time, anywhere from an hour to a day, after a certain number of failed attempts. According to experts quoted by Mr. Stross, that limited number of failures followed by a lockout period is key:

A short password wouldn’t work well if an attacker could try every possible combination in quick succession. But as Mr. Herley and Mr. Florêncio note, commercial sites can block “brute-force attacks” by locking an account after a given number of failed log-in attempts. “If an account is locked for 24 hours after three unsuccessful attempts,” they write, “a six-digit PIN can withstand 100 years of sustained attack.”

That's pretty good, and good enough for most people. My passwords are a little tougher than that, but not much, and I've been using the same passwords on eBay, Amazon and Paypal for at least 5 years now. I think I'll keep 'em a little longer.
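The arithmetic behind the quoted claim is easy to check. Here's a minimal Python sketch using the numbers from the quote (3 attempts per 24-hour lockout, a 6-digit PIN); everything else is illustrative:

```python
# Worst-case time to exhaust a 6-digit PIN when an account
# locks for 24 hours after every 3 failed attempts.

def brute_force_days(keyspace, attempts_per_lockout, lockout_hours):
    """Days an attacker needs to try every combination."""
    lockouts_needed = keyspace / attempts_per_lockout
    return lockouts_needed * lockout_hours / 24

days = brute_force_days(10**6, 3, 24)   # 10^6 possible 6-digit PINs
years = days / 365
print(f"{years:.0f} years to exhaust the keyspace")  # roughly 913 years
```

Even allowing that an attacker only needs to cover half the keyspace on average, that's still centuries - which is why the lockout, not the password's complexity, is doing the heavy lifting.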

Some interesting links

I don't necessarily agree with all or any of these articles, but I thought they were interesting.

What's the long term result of national ID? Maybe this.

Obama proposes business tax credit. Pays for it by robbing Peter to pay Paul.

Is the healthcare act unconstitutional because it conflicts with the First Amendment?

According to the Examiner, Congress wants to double-tax U.S. oil. Haven't we been through this before?

Hope you find these interesting.

Online privacy is more complicated than many realize part 2

Last post I looked at the first five of Paul Rubin's "Ten fallacies about web privacy." The subtitle, which is also part of his fifth fallacy, is, "We are not used to the concept that something can be known and at the same time no person knows it." That is a true statement, but it's a statement about a snapshot in time. There are data breaches every day, with thousands, tens of thousands, even tens of millions of people's private data being stolen. Just because only a computer knows it today is no guarantee a computer won't tell an unauthorized someone tomorrow.

That said, let's take a look at fallacies six through ten.

6. Information can be used for price discrimination (differential pricing), which will harm consumers. Paul makes a good point that differential pricing isn't necessarily all bad. With good data, a business can charge customers who are willing to pay more a higher price, making it financially and technically possible to charge less to those who can't afford as much.
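A toy example makes the point concrete (all the numbers here are hypothetical, chosen only for illustration):

```python
# Two customer segments with different willingness to pay.
# unit_cost is what it costs the business to serve one customer.
unit_cost = 3
segments = {"high": 10, "low": 4}  # willingness to pay per segment

def profit_at(price):
    """Profit if everyone is offered the same price: only
    customers willing to pay that much actually buy."""
    return sum(price - unit_cost for wtp in segments.values() if wtp >= price)

# One price for everyone: the seller picks whichever single price earns more.
best_uniform = max(profit_at(p) for p in segments.values())

# Differential pricing: each segment pays what it's willing to pay.
differential = sum(wtp - unit_cost for wtp in segments.values())

print(best_uniform, differential)  # 7 vs 8
```

With a single price, the seller's best move is to charge 10 and serve only the high-paying customer; with differential pricing both customers are served and profit still rises. That's the upside Paul is pointing at - the privacy cost of the data that makes it possible is the other side of the ledger.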

7. If consumers knew how information about them was being used, they would be irate. I think Paul isn't looking deep enough in his response to this fallacy. In fact, I'm not sure his response refutes it:

When something (such as tainted food) actually harms consumers, they learn about the sources of the harm. But in spite of warnings by privacy advocates, consumers don't bother to learn about information use on the Web precisely because there is no harm from the way it is used.

In the case of tainted food, the harm is plain, even obvious. In the world of online privacy the source of the harm might not even be discernible by the average person. It might be months or even years before they discover any harm has been done. Not to mention that just because some knowledge about me isn't harmful doesn't mean I want it being gathered, tabulated and distributed.

8. Increasing privacy leads to greater safety and less risk. Again, I only agree with Paul to a point on this one. Information can be used to verify identity, raise flags on unusual behavior, and determine many, many things that can be used to target me specifically for all kinds of nifty products I don't want. But the amount of data needed to verify my identity is actually very small. The amount needed to tell if an unusual purchase is being made is not much greater, and it comes from a very specific, and in some ways very limited, source - at least when compared to all of my potentially trackable activity. I don't need to give up a ton of information to every website I visit to receive the benefits Paul mentions.

9. Restricting the use of information (such as by mandating consumer "opt-in") will benefit consumers. Paul asserts that "the use of information is generally benign and valuable," so such restrictions would be harmful. Generally benign? Maybe. Valuable? Always to the information gatherers, sometimes to the consumer. One thing I can agree with, opt-in would be harmful to the data gatherers, at least in the short term. Most people never change default settings, so to default to privacy when the norm has been gather data freely would be like damming the Colorado.

There is a difference in this case that might make people who would normally live with the defaults decide to opt in. We are used to having sites recognize and remember us, and those little conveniences would lead many people to opt in. So the information flow would be drastically reduced for a time, then gradually increase to some point below where it was originally.

10. Targeted advertising leads people to buy stuff they don't want or need. Hmmm. Isn't the point of targeted advertising that it makes sure the ads served up are the ads the person would want to see? In addition, Paul makes good points that this fallacy shows a fundamental failure to understand how our economy works.

Paul Rubin has some very good points, but I think his point of view tends to focus on what's good for businesses more than what's good for people. Online privacy is a very complicated issue, and what is good for businesses may not be good for individuals. That said, it is a topic that needs more discussion from more points of view so that everyone concerned has input. Whatever privacy, online or off, looks like in ten, or even five, years, it's going to be very different from what privacy was just ten years ago. That doesn't have to be a bad thing, but it's not automatically going to be a good thing, despite what Paul Rubin has to say.

 

Online privacy is more complicated than many realize

Paul Rubin, writing in the Wall Street Journal, discusses 10 of the most dangerous myths about online privacy. I have to admit that I hadn't thought of a couple of them. Several of them are related, and if policy is decided based on any of them it could have a serious effect on the way we experience life online. I'm going to look at the first five myths today, and the second five tomorrow.

Rubin's first five myths:

1. Privacy is free. It is not possible to gain more privacy without losing something. The more privacy, the less information available for websites to market to advertisers, the fewer targeted ads and the less targeted content, and the less efficiently websites can serve their customers. Information is the oil of marketing, as much on the web as in the real world. The less information, the less efficiently the engine runs.

2. If there are costs of privacy, they are borne by companies. Facebook has made a business model out of proving this one wrong. The more information Facebook gathers, the more personalized the site can be for each user, the better Facebook can connect you to people you know, and the more targeted ads can be - so Facebook can offer advertisers blocks of users who match their target demographic extremely closely, which means more money to improve services for users. Total privacy isn't desirable, even if it were possible. As noted in myth 1, information is essential to the smooth running of the internet as we know it. But total sharing of information isn't desirable, either.

3. If consumers have less control over information, then firms must gain and consumers must lose. See above. Information makes it possible for businesses to tailor their online offerings to site visitors. Imagine Facebook if it didn't gather any information. The experience would be totally different, and it wouldn't have half a billion users.

4. Information use is "all or nothing." The assumption is that businesses will continue to operate and offer services even without the information they currently gather. That may be true, but services may suffer. I liked the example Paul used:

For example, search engines can better target searches if they know what searchers are looking for. (Google's "Did you mean . . ." to correct typos is a familiar example.) Keeping a past history of searches provides exactly this information. Shorter retained search histories mean less effective targeting.

We may not realize how much we rely on targeted searches, but I remember when searching for "black hair care" had porn sites for the first five results, and that was not what I was looking for.

5. If consumers have less privacy, then someone will know things about them that they may want to keep secret. I don't entirely agree with Paul on this one. He says:

Most information is used anonymously. To the extent that things are "known" about consumers, they are known by computers. This notion is counterintuitive; we are not used to the concept that something can be known and at the same time no person knows it. But this is true of much online information.

He's right, to a point. But it has been shown several times that it is impossible to truly anonymize personal information and still have it be useful, and even anonymized information can be used to find someone. A birthdate, gender and zip code are enough to uniquely identify most people; even gender plus age plus zip code narrows it down to a few hundred people or less. And social networks allow individuals to put any and everything about themselves online for the world to see. There does need to be some regulation. Privacy is important. So is a business's ability to gather information about its customers so it can serve them better. But there should be a limit to what businesses can gather.
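The re-identification point is easy to demonstrate. Here's a hedged sketch with a tiny made-up population (the records and field values are invented for illustration): any "anonymized" record whose quasi-identifiers are unique in the dataset can be linked back to a person by anyone who knows those few facts about them.

```python
from collections import Counter

# Tiny invented dataset: "anonymized" records that still carry
# quasi-identifiers (zip code, gender, birthdate).
records = [
    {"zip": "30301", "gender": "F", "birthdate": "1975-03-02", "condition": "flu"},
    {"zip": "30301", "gender": "M", "birthdate": "1975-03-02", "condition": "asthma"},
    {"zip": "30301", "gender": "F", "birthdate": "1980-07-15", "condition": "diabetes"},
    {"zip": "30305", "gender": "F", "birthdate": "1975-03-02", "condition": "flu"},
]

def quasi_id(r):
    return (r["zip"], r["gender"], r["birthdate"])

counts = Counter(quasi_id(r) for r in records)

# Every record whose (zip, gender, birthdate) combination appears only
# once can be re-identified by anyone who knows those three facts.
unique = [r for r in records if counts[quasi_id(r)] == 1]
print(len(unique), "of", len(records), "records are uniquely identifiable")
```

In this toy dataset every record is unique on just those three fields - no names needed. Real datasets are bigger, but the same counting argument is what makes birthdate-plus-gender-plus-zip so identifying.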

See the next five myths tomorrow.

Online Safety: Remember what your mother told you

It's not often you see someone saying the same things you would do to protect yourself "in the real world" apply in the virtual world, too. US CERT Cyber Security Tip ST05-014, "Real world warnings keep you safe online" uses some old sayings to demonstrate that very point: 

    * Don't trust candy from strangers - Anyone can post anything on the internet, so don't accept anything as truth until you've verified it. Watch out for spam and phishing emails - and remember that email addresses and URLs can be spoofed. Make sure you know where your information is coming from.

    * If it sounds too good to be true, it probably is - How many times have you seen an ad on a page or a pop-up window proclaiming that you are the 1,000,000th visitor to a site? All you had to do was give them some information to claim your prize! How many emails have you received claiming to have millions just waiting for you to claim them? This type of scam predates email by decades. Don't let greed get the better of you. You're more likely to hit the jackpot on every lotto drawing for a month than you are to actually receive money (or anything good) from one of these scams, or their cousins, the "let us scan your computer" popups.

    * Don't advertise that you are away from home - Autoresponders, the email auto replies you can setup for when you're away from your desk, are a wonderful thing. But don't give any more information than absolutely necessary. "I will be in training all week and will be able to answer email sporadically, if at all" is probably ok. "On vacation in Aruba from 9-12 to 9-24! Woohoo!" isn't.

    * Lock up your valuables - If someone can access your computer they may be able to access or steal personal information - maybe even information you didn't realize was on your computer. Usernames and passwords, bank account information, all kinds of things that can either give them direct access to accounts you don't want them to have, or let them figure out what you might use as a username or password somewhere else.

    * Have a backup plan - Regular backups help recover from data loss caused by successful attacks, hardware failure, carelessness or accidents. They can also help you determine what kind of damage may have been done. Unfortunately, if a successful attack isn't discovered for a long time backups may be compromised, too.

Some other useful CERT articles:

Using Caution with Email Attachments

Avoiding Social Engineering and Phishing Attacks

Reducing Spam, Identifying Hoaxes and Urban Legends

Recognizing and Avoiding Spyware


Windows DLL vulnerability: Bad Mojo

On August 23rd Microsoft released Microsoft Security Advisory (2269637): Insecure Library Loading Could Allow Remote Code Execution. It seems that if a program doesn't properly specify the path for a Dynamic-Link Library (DLL), an attacker can create a bogus DLL that will be loaded instead when the program tries to load the real one. That DLL can contain code to install software, delete files, or do anything the user can do - or anything the program calling the DLL can do, if it has greater privileges than the user.

What makes this bad mojo? This attack works over the network, so it can be performed by making a file available online that will cause a program on your computer to launch and call the bogus DLL. The attacker doesn't actually have to put anything on your machine; he only has to get you to open a remote file, and if you are logged into your computer as an administrator (as most home users and many small business users are), your computer is no longer yours.

This exploit works over networking protocols. The advisory specifically names WebDAV and SMB, but it will probably work over NFS (Sun's Network File System) and AFP (Apple Filing Protocol), and maybe FTP as well.
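The mechanics are easier to see in a simulation. The sketch below models the vulnerable behavior in plain Python - no real DLLs are loaded, and the directories, share, and file names are invented for illustration. The key point is that an insecure load searches a list of directories (historically including the directory of the document you just opened, which can be a remote share) and takes the first match, while a secure load uses a fully qualified path:

```python
# Simulated filesystem: the attacker has planted a DLL on a remote share,
# and the legitimate copy lives in the system directory.
fake_filesystem = {
    r"\\evil-share\docs\helper.dll",    # attacker's planted copy
    r"C:\Windows\System32\helper.dll",  # legitimate copy
}

def load_library_insecure(name, document_dir):
    """Search-order lookup: first match wins, including the
    directory of the document the user just opened."""
    search_order = [document_dir, r"C:\Program Files\App", r"C:\Windows\System32"]
    for d in search_order:
        candidate = d + "\\" + name
        if candidate in fake_filesystem:
            return candidate
    raise FileNotFoundError(name)

def load_library_secure(name):
    """Fully qualified path: no search, nothing to hijack."""
    path = r"C:\Windows\System32" + "\\" + name
    if path in fake_filesystem:
        return path
    raise FileNotFoundError(name)

# Opening a document from the attacker's share loads the attacker's DLL...
print(load_library_insecure("helper.dll", r"\\evil-share\docs"))
# ...while the fully qualified load always gets the real one.
print(load_library_secure("helper.dll"))
```

That search-then-take-first-match behavior is why Microsoft's guidance to developers (and the workarounds below) all amount to the same thing: stop looking in attacker-controllable locations before trusted ones.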

Microsoft is providing workarounds to negate the problem, but is relying on the software companies to supply patches to their software to solve it. If Microsoft were to patch this issue at the operating system level, a lot of programs would break, so at the time I write this they are not going to issue a patch. Here are the workarounds Microsoft is offering:

Mitigating Factors and Suggested Actions

Mitigating Factors

Mitigation refers to a setting, common configuration, or general best-practice, existing in a default state, that could reduce the severity of exploitation of this issue. The following mitigating factors may be helpful in your situation:

This issue only affects applications that do not load external libraries securely. Microsoft has previously published guidelines for developers that recommend alternate methods to load libraries that are safe against these attacks.

For an attack to be successful, a user must visit an untrusted remote file system location or WebDAV share and open a document from this location that is then loaded by a vulnerable application.

The file sharing protocol SMB is often disabled on the perimeter firewall. This limits the possible attack vectors for this vulnerability.

Workarounds

Workaround refers to a setting or configuration change that does not correct the underlying issue but would help block known attack vectors before a security update is available. Microsoft has tested the following workarounds and states in the discussion whether a workaround reduces functionality:

Disable loading of libraries from WebDAV and remote network shares

Note This workaround requires installation of the tool described in Microsoft Knowledge Base Article 2264107.

Microsoft has released a tool which allows customers to disable the loading of libraries from remote network or WebDAV shares. This tool can be configured to disallow insecure loading on a per-application or a global system basis.

Customers who are informed by their vendor of an application being vulnerable can use this tool to help protect against attempts to exploit this issue.

Disable the WebClient service

Disabling the WebClient service helps protect affected systems from attempts to exploit this vulnerability by blocking the most likely remote attack vector through the Web Distributed Authoring and Versioning (WebDAV) client service. After applying this workaround it is still possible for remote attackers who successfully exploit this vulnerability to cause Microsoft Office Outlook to run programs located on the targeted user's computer or the Local Area Network (LAN), but users will be prompted for confirmation before opening arbitrary programs from the Internet.

To disable the WebClient Service, follow these steps:

1. Click Start, click Run, type Services.msc and then click OK.

2. Right-click WebClient service and select Properties.

3. Change the Startup type to Disabled. If the service is running, click Stop.

4. Click OK and exit the management application.

Impact of workaround. When the WebClient service is disabled, Web Distributed Authoring and Versioning (WebDAV) requests are not transmitted. In addition, any services that explicitly depend on the Web Client service will not start, and an error message will be logged in the System log. For example, WebDAV shares will be inaccessible from the client computer.

How to undo the workaround.

To re-enable the WebClient Service, follow these steps:

1. Click Start, click Run, type Services.msc and then click OK.

2. Right-click WebClient service and select Properties.

3. Change the Startup type to Automatic. If the service is not running, click Start.

4. Click OK and exit the management application.

Block TCP ports 139 and 445 at the firewall

These ports are used to initiate a connection with the affected component. Blocking TCP ports 139 and 445 at the firewall will help protect systems that are behind that firewall from attempts to exploit this vulnerability. Microsoft recommends that you block all unsolicited inbound communication from the Internet to help prevent attacks that may use other ports. For more information about ports, see the TechNet article, TCP and UDP Port Assignments.

Impact of workaround. Several Windows services use the affected ports. Blocking connectivity to the ports may cause various applications or services to not function. Some of the applications or services that could be impacted are listed below:

Applications that use SMB (CIFS)

Applications that use mailslots or named pipes (RPC over SMB)

Server (File and Print Sharing)

Group Policy

Net Logon

Distributed File System (DFS)

Terminal Server Licensing

Print Spooler

Computer Browser

Remote Procedure Call Locator

Fax Service

Indexing Service

Performance Logs and Alerts

Systems Management Server

License Logging Service

How to undo the workaround. Unblock TCP ports 139 and 445 at the firewall. For more information about ports, see TCP and UDP Port Assignments.

Additional Suggested Actions

Install updates from third-party vendors that address insecure library loading

Third-party vendors may release updates that address insecure library loading in their products. Microsoft recommends that customers contact their vendor if they have any questions whether or not a specific application is affected by this issue, and monitor for security updates released by these vendors.

Protect Your Computer

We continue to encourage customers to follow our Protect Your Computer guidance of enabling a firewall, getting software updates and installing antivirus software. Customers can learn more about these steps by visiting Protect Your Computer.

For more information about staying safe on the Internet, visit Microsoft Security Central.

Keep Windows updated

All Windows users should apply the latest Microsoft security updates to help make sure that their computers are as protected as possible. If you are not sure whether your software is up to date, visit Windows Update, scan your computer for available updates, and install any high-priority updates that are offered to you. If you have Automatic Updates enabled, the updates are delivered to you when they are released, but you have to make sure you install them.

I hope you find this information helpful. Remember to be careful what you click on.

Flash drive pierces Pentagon security

Have you ever worked at a place that had a strict policy against any kind of portable external storage like USB hard drives or thumb drives? Maybe they even had group policies in place that disabled external storage on the USB ports? There's a good reason for that. Tim Greene of Businessweek reports that a successful 2008 attack on U.S. military networks was accomplished by inserting a flash drive into a laptop in the Middle East; the laptop later connected to a military network, and the malware spread to computers at Central Command - both classified and unclassified networks, networks that in theory have no direct connection. But what spreads by flash drive once can spread that way again.

The breach prompted a change in Pentagon policies, which is good. But why were flash drives allowed in the first place? I can understand the original laptop, but unless a computer with two network cards had one connected to a non-classified network and one to a classified network (another bad idea), infection by flash drive or other removable media is the most likely attack vector for moving the malware between the two types of network - and that should never have been allowed.

The attack gave an enemy unprecedented access into the mind of the U.S. military. Whoever the attacker was, they had the ability to see and even change battle plans and orders. The possible harm they could have done is mind-boggling, and that the attack succeeded the way it did is more than a little scary. Pentagon policies have changed, but have they changed enough? I hope so.