Monday, September 26, 2011

Logging out of Facebook is not enough



This is a repost of Nik Cubrilovic's blog post of September 25, 2011.


Dave Winer wrote a timely piece this morning about how Facebook is scaring him since the new API allows applications to post status items to your Facebook timeline without a user's intervention. It is an extension of Facebook Instant and they call it frictionless sharing. The privacy concern here is that because you no longer have to explicitly opt in to share an item, you may accidentally share a page or an event that you did not intend others to see.
The advice is to log out of Facebook. But logging out of Facebook only de-authorizes your browser from the web application; a number of cookies (including your account number) are still sent along with every request to facebook.com. Even if you are logged out, Facebook still knows and can track every page you visit. The only solution is to delete every Facebook cookie in your browser, or to use a separate browser for Facebook interactions.
Here is what is happening, as viewed by the HTTP headers on requests to facebook.com. First, a normal request to the web interface as a logged in user sends the following cookies:
Note: I have both fudged the values of each cookie and added line wraps for legibility
Cookie:
datr=tdnZTOt21HOTpRkRzS-6tjKP; 
lu=ggIZeheqTLbjoZ5Wgg; 
openid_p=101045999; 
c_user=500011111; 
sct=1316000000; 
xs=2%3A99105e8977f92ec58696cf73dd4a32f7; 
act=1311234574586%2F0
The request to the logout function will then see this response from the server, which is attempting to unset the following cookies:
Set-Cookie:
_e_fUJO_0=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/; domain=.facebook.com; httponly
c_user=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/; domain=.facebook.com; httponly
fl=1; path=/; domain=.facebook.com; httponly
L=2; path=/; domain=.facebook.com; httponly
locale=en_US; expires=Sun, 02-Oct-2011 07:52:33 GMT; path=/; domain=.facebook.com
lu=ggIZeheqTLbjoZ5Wgg; expires=Tue, 24-Sep-2013 07:52:33 GMT; path=/; domain=.facebook.com; httponly
s=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/; domain=.facebook.com; httponly
sct=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/; domain=.facebook.com; httponly
W=1316000000; path=/; domain=.facebook.com
xs=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/; domain=.facebook.com; httponly
The cookie name is the first item on each Set-Cookie line. If you compare the cookies set in the logged-in request above with the cookies being unset by the logout response, you will quickly see that a number of cookies are not deleted at all, that two cookies (locale and lu) are only given new expiry dates, and that three new cookies (W, fl, L) are set.
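As a quick sanity check on that comparison, here is a tiny Python sketch that diffs the two lists; the cookie names are taken straight from the (fudged) headers in this post:

# Diff the cookie names sent in the logged-in request against the names
# the logout response actually expires or resets.
logged_in = {"datr", "lu", "openid_p", "c_user", "sct", "xs", "act"}
expired_by_logout = {"_e_fUJO_0", "c_user", "s", "sct", "xs"}  # expiry set to 1970
reset_by_logout = {"locale", "lu", "fl", "L", "W"}             # given fresh values or expiry dates

print("never touched:", logged_in - expired_by_logout - reset_by_logout)
print("kept, just re-dated:", logged_in & reset_by_logout)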
Now I make a subsequent request to facebook.com as a 'logged out' user:

Cookie:
datr=tdnZTOt21HOTpRkRzS-6tjKP; 
openid_p=101045999; 
act=1311234574586%2F0; 
L=2; 
locale=en_US; 
lu=ggIZeheqTLbjoZ5Wgg; 
lsd=IkRq1; 
reg_fb_gate=http%3A%2F%2Fwww.facebook.com%2Findex.php%3Flh%3Dbf0ed2e54fbcad0baaaaa32f88152%26eu%3DJhvyCGewZ3n_VN7xw1BvUw; 
reg_fb_ref=http%3A%2F%2Fwww.facebook.com%2Findex.php%3Flh%3Dbf0ed2e54fbcad0b1aaaaa152%26eu%3DJhvyCGewZ3n_VN7xw1BvUw
The primary cookies that identify me as a user are still there (act is my account number), even though I am looking at a logged-out page. Logged-out requests still send nine different cookies, including the most important cookies that identify you as a user.

This is not what 'logout' is supposed to mean - Facebook are only altering the state of the cookies instead of removing all of them when a user logs out.
With my browser logged out of Facebook, whenever I visit any page with a Facebook like button, or share button, or any other widget, the information, including my account ID, is still being sent to Facebook. The only solution to Facebook not knowing who you are is to delete all Facebook cookies.
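If you want to do that deleting outside the browser UI, here is a rough sketch against Firefox's cookie store; the profile path is hypothetical and the cookies.sqlite schema varies by browser and version, so treat it as illustrative only and close the browser before running it:

# Illustrative only: list and delete facebook.com cookies from a Firefox profile.
# The profile path below is a placeholder; schema details may differ in your version.
import sqlite3
from pathlib import Path

db = Path.home() / ".mozilla/firefox/example.default/cookies.sqlite"  # hypothetical path
conn = sqlite3.connect(str(db))

rows = conn.execute(
    "SELECT host, name FROM moz_cookies WHERE host LIKE '%facebook.com'"
).fetchall()
print("Facebook cookies found:", rows)

conn.execute("DELETE FROM moz_cookies WHERE host LIKE '%facebook.com'")
conn.commit()
conn.close()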
You can test this for yourself using any browser with developer tools installed. It is all hidden in plain sight.
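For a scripted version of the same check, here is a minimal sketch using Python's requests library; the logout URL is an illustrative guess, not a documented endpoint:

# Observe which cookies a site sets on first contact and which survive a "logout".
import requests

session = requests.Session()

# First request: the server plants its cookies via Set-Cookie headers.
session.get("https://www.facebook.com/")
print("after first visit:", sorted(session.cookies.keys()))

# A logout only removes what the server explicitly expires; everything else
# stays in the cookie jar and is sent with every later request to the domain.
session.get("https://www.facebook.com/logout.php")  # illustrative endpoint
print("after 'logout':", sorted(session.cookies.keys()))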

An Experiment

This brings me back to a story that I have yet to tell. A year ago I was screwing around with multiple Facebook accounts as part of some development work. I created a number of fake Facebook accounts after logging out of my browser. After using the fake accounts for some time, I found that they were suggesting my real account to me as a friend. Somehow Facebook knew that we were all coming from the same browser, even though I had logged out.
There are serious implications if you are using Facebook from a public terminal. If you log in on a public terminal and then hit 'logout', you are still leaving behind fingerprints of having been logged in. As far as I can tell, these fingerprints remain (in the form of cookies) until somebody explicitly deletes all the Facebook cookies for that browser. Associating an account ID with a real name is easy - the same ID is used to identify your profile.
Facebook knows every account that has accessed Facebook from every browser and is using that information to suggest friends to you. The strength of the 'same machine' value in the algorithm that works out friends to suggest may be low, but it still happens. This is also easy to test and verify.
I reported this issue to Facebook in a detailed email and got bounced around. I emailed somebody I knew at the company and forwarded the request to them. I never got a response. The entire process was so flaky and frustrating that I haven't bothered sending them two XSS holes that I have also found in the past year. They really need to get their shit together on reporting privacy issues; I am sure they take security issues a lot more seriously.

The Rise of Privacy Awareness

10-15 years ago, when I first got into the security industry, awareness of security issues amongst users, developers and systems administrators was low. Microsoft Windows and IIS were Swiss cheese in terms of security vulnerabilities. You could manually send malformed payloads to IIS 4.0 and have it crash with a stack or heap overflow, which would usually lead to a remote vulnerability.
A decade ago the entire software industry went through a reformation on awareness of security principles in administration and development. Microsoft re-trained all of their developers on buffer overflows, string formatting bugs, off-by-one bugs etc. and audited their entire code base. A number of high-profile security incidents raised awareness, and today vendors have proper security procedures, from reporting new bugs to hotfixes and secure programming principles (this wasn't just a Microsoft issue - but I had the most experience with them).
Privacy today feels like what security did 10-15 years ago - awareness of the issues is steadily building, and blog posts from prominent technologists are helping to steamroll public consciousness. The risks around privacy today are just as serious as security leaks were then - except that there are an order of magnitude more users online and a lot more private data being shared on the web.
Facebook are front-and-center in the new privacy debate just as Microsoft were with security issues a decade ago. The question is what it will take for Facebook to address privacy issues, give their users the tools required to manage their privacy, and implement clear policies - not pages and pages of confusing legal documentation, and not a 'logout' that doesn't really mean 'logout'.

Update: Contact with Facebook

To clarify, I first emailed this issue to Facebook on the 14th of November 2010. I also copied the email to their press address to get an official response on it. I never got any response. I sent another email to Facebook and their press address, copied to somebody I know at Facebook, on the 12th of January 2011. Again, I got no response. I have copies of all the emails, and the subject lines were very clear about the importance of this issue.
I have been sitting on this for almost a year now. The renewed discussion about Facebook and privacy this weekend prompted me to write this post.

Update 2: Followup

The reaction to this story has been amazing. I am writing a followup that will analyze both the data that I have collected as well as the response from Facebook (which you can read below in the comments). If you wish to view the raw logs, I have saved them here. Specifically the datr and lu cookies are retained after logout and on subsequent requests, and the a_user cookie, which contains your userid, is only cleared once the session is restarted. Most importantly, connection state is retained through these HTTP connections. There is never a clean break between a logged in session and a logged out session - but I will have more on that in a follow-up post.
Erratum: I refer to the wrong cookie name in the post above. I also say 'all sites' can be tracked, when I meant to say 'all sites that integrate Facebook'.

Monday, August 1, 2011

Private browsing really isn't

Originally published 3/29/11 on lubbockonline.com/glasshouses


Do you use the private browsing feature of your browser? Though they may have different names for it, the major browsers all have some type of private browsing available. All of them do pretty much the same thing. From the description of Private Browsing in Opera:

Private tabs

To browse without leaving any trace of the websites you visit, you can use a private tab. This is especially useful if you are using someone else's computer, or planning a surprise that you want to keep secret. When you close a private tab, the following data related to the tab is deleted:

  • browsing history
  • items in cache
  • cookies
  • logins


It looks really good - but your browser isn't the only thing gathering info about you on the web. The explanation given on Google Chrome's private browsing page is pretty clear:

Browsing in incognito mode only keeps Google Chrome from storing information about the websites you've visited. The websites you visit may still have records of your visit. Any files saved to your computer will still remain on your computer.

For example, if you sign into your Google Account on http://www.google.com while in incognito mode, your subsequent web searches are recorded in your Google Web History. In this case, to prevent your searches from being stored in your Google Account, you'll need to pause your Google Web History tracking.


If you're using private browsing it will protect you from people finding out what you're doing online by checking your browser, but it won't protect you from the data and logs kept by your ISP, the various servers your data travels through, and, of course, the sites you visit. Private browsing isn't really private except on the computer the browser is running on.

Killeen ISD student records found "blowing in the wind"

Originally published 3/28/11 on lubbockonline.com/glasshouses


Andy Ross of the Killeen Daily Herald reports that Killeen Independent School District documents containing students' identifying information, including Social Security numbers, were found "blowing in the wind."

According to a school district spokesperson, the school district doesn't have policies on shredding documents. It hasn't used Social Security numbers to identify students since 2008, so these documents may be older than that. Not that it matters, since about the only way you can change your Social Security number is to go into the Witness Protection program.

The school district does have guidelines regarding personal information on staff and students, but if it doesn't include shredding documents before disposal it doesn't mean much. Dumpster diving is still one of the best ways to get information on individuals or businesses - and apparently these records weren't even in a dumpster.

There are state and federal laws covering the use of student data. I suspect some of them may have been broken here, but whether it was the school or someone they paid to dispose of the records, I have no idea.

I wonder what policies and procedures LISD has in place to protect and properly dispose of student records. I hope that LISD's policies are more comprehensive and better enforced than those in Killeen.

Encrypt your Facebook sessions to protect data when it takes the scenic route through China

Originally published 3/25/11 on lubbockonline.com/glasshouses


CIO Online reports that Facebook traffic coming from AT&T servers was accidentally routed through China and North Korea. This might not seem like a concern, but unless you're connecting to Facebook using an encrypted connection, everything that you do can be monitored by network operators. China is known for spying on its users, and once your data is on the Chinese network, it's just like any Chinese user's data. Any data you look at on Facebook could be monitored and/or saved for later analysis as it goes through China.

But if you encrypt your data, the network operators can't see it. Encrypting your login to Facebook is easy. Just make sure your Facebook bookmark is set to "https://www.facebook.com" and every time you log in your username and password will be encrypted. But once you log in, Facebook defaults back to an unencrypted connection. Facebook does realize that you may want to have everything you do on Facebook encrypted, and has a setting to allow that. Go to the 'Account' menu, select 'Account Settings', scroll down to 'Account Security', then click on 'change'. Check the "Browse Facebook on a secure connection (https) whenever possible" box.

It's almost always a good idea to use encryption on the web. It doesn't use much processing overhead and it protects your information as it goes from point 'A' to point 'B'. If you use Firefox there's even an add-on called "HTTPS Everywhere" that will use https to connect to any website that supports it.
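If you want to see for yourself whether a site at least pushes you onto HTTPS, here is a small check with Python's requests library; facebook.com's exact redirect behaviour may differ from what is assumed here:

# A rough check of whether a plain-HTTP request gets upgraded to HTTPS.
# A 301/302 with an https:// Location means the page is served encrypted,
# but it says nothing about the rest of the session staying on HTTPS.
import requests

resp = requests.get("http://www.facebook.com/", allow_redirects=False)
print(resp.status_code, resp.headers.get("Location"))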



Facebook + Separation + defriend = Jail Time?

Originally published 3/24/11 on lubbockonline.com/glasshouses


Ben Muessig at AOL.com reports on another case of someone shooting themselves in the foot on Facebook. The headline says it all: "Man Charged with Polygamy after defriending his first wife on Facebook." Richard Leon Barton, Jr. became estranged from his first wife while he was in prison. They hooked up again on Facebook after he got out.

That's fine, but then Richard defriended his wife. He didn't have his privacy settings locked down, though, so she was able to see the pictures he posted of himself and his second wife.

Oops. He hadn't divorced wife #1 yet.

Sunday, July 31, 2011


Computer Myths: 5 myths and the truth about them.

Originally published 3/23/11 on lubbockonline.com/glasshouses


The U.S.-CERT site is an excellent resource for information on computer security. It provides information at two levels, technical and non-technical. One of the articles is a list of common myths and the truth about them. I've provided the link, but here's the list of myths and the truths about them:

What are some common myths, and what is the truth behind them?

  • Myth: Anti-virus software and firewalls are 100% effective.

    Truth: Anti-virus software and firewalls are important elements to protecting your information (see Understanding Anti-Virus Software and Understanding Firewalls for more information). However, neither of these elements are guaranteed to protect you from an attack. Combining these technologies with good security habits is the best way to reduce your risk.

  • Myth: Once software is installed on your computer, you do not have to worry about it anymore.

    Truth: Vendors may release updated versions of software to address problems or fix vulnerabilities (see Understanding Patches for more information). You should install the updates as soon as possible; some software even offers the option to obtain updates automatically. Making sure that you have the latest virus definitions for your anti-virus software is especially important.

  • Myth: There is nothing important on your machine, so you do not need to protect it.

    Truth: Your opinion about what is important may differ from an attacker's opinion. If you have personal or financial data on your computer, attackers may be able to collect it and use it for their own financial gain. Even if you do not store that kind of information on your computer, an attacker who can gain control of your computer may be able to use it in attacks against other people (see Understanding Denial-of-Service Attacks and Understanding Hidden Threats: Rootkits and Botnets for more information).

  • Myth: Attackers only target people with money.

    Truth: Anyone can become a victim of identity theft. Attackers look for the biggest reward for the least amount of effort, so they typically target databases that store information about many people. If your information happens to be in the database, it could be collected and used for malicious purposes. It is important to pay attention to your credit information so that you can minimize any potential damage (see Preventing and Responding to Identity Theft for more information).

  • Myth: When computers slow down, it means that they are old and should be replaced.

    Truth: It is possible that running newer or larger software programs on an older computer could lead to slow performance, but you may just need to replace or upgrade a particular component (memory, operating system, CD or DVD drive, etc.). Another possibility is that there are other processes or programs running in the background. If your computer has suddenly become slower, it may be compromised by malware or spyware, or you may be experiencing a denial-of-service attack (see Recognizing and Avoiding Spyware and Understanding Denial-of-Service Attacks for more information).

I especially like that last one. The newest computer in my house is 4 years old and runs everything from online games (the free version of D&D) to streaming HD video. Just because a computer is a few years old doesn't mean it's obsolete. But a suddenly slow computer could be, and probably is, infected with malware. These days, if you're running your computer as admin you may not be able to get rid of the malware without wiping the computer. So if the account you surf the web with can install software, create a new standard user account and start using it instead. You can usually remove any malware that gets installed in a normal user account. If it installs into an admin account you'll have to wipe the computer to be sure.

Saturday, July 30, 2011

New technology will measure your reaction to advertising

Originally published 3/22/11 on lubbockonline.com/glasshouses


I read a lot of science fiction, so I can see all kinds of bad futures for this one. Larry Dignan at ZDNet reports on a new technology: neuromarketing. Neuromarketing is the creation of NeuroFocus, a company that claims to develop advertising based on neuroscience. If you don't know (I didn't), neuroscience is an interdisciplinary science involving several fields including chemistry, computer science, and psychology, to name a few.

Neurofocus has a device called the Mynd that is basically a consumer friendly personal wireless EEG. It monitors your response to advertising - not just what you tell them your response is, but how you really react. Larry covered a few of the highlights of what it does:

I can see advertising folks drooling now. The aim for Mynd is to capture real responses from consumers who would participate in home panels. Mynd would send data to a mobile device that would capture reactions. Among the key details:
* Mynd has dense-array medical grade electroencephalographic (EEG) sensors.
* The device captures brainwave activity across the full cortex and can connect to mobile devices via Bluetooth.
* The sensors are dry so there are no gels to burden consumers.
* Mynd has been in testing and development for three years and will roll out to labs in the U.S., Europe, Asia Pacific, Latin America and the Middle East.
Dr. A. K. Pradeep, CEO of NeuroFocus, said Mynd can enable “neuromarketing” to gain “critical knowledge and insights into how consumers perceive their brands, products, packaging, in-store marketing, and advertising at the deep subconscious level in real time.”

The potential of this device is frightening - but at this point it's not a very big concern. Unless you agree to put on the headset it's not going to affect you. But if real privacy laws aren't passed soon this may become the next big privacy fight. Even if the technology becomes miniaturized enough to fit in a baseball cap or a hoodie it may not be a big deal, as long as you have to agree to transmitting your data. But if your data can be read without your permission, this will be a major privacy issue. If the technology reaches the point where it can scan from a distance it could become a big deal. We're already in a fight over who controls our personal data online. You don't get much more personal than your brainwaves.

 

Tell your Congressman, don't cut Social Security IT upgrade funding

Originally published 3/18/11 on lubbockonline.com/glasshouses


Anyone who has read my comments on "Lubbock Left" and "Mr. Conservative" knows I am not a huge supporter of Social Security. But just because I don't think it's Uncle Sam's job to take care of me and mine doesn't mean I'm oblivious to the reality of the situation. And that reality is that our Social Security system is residing in a data center that is decades old with a backup system that may or may not work, and will take five days to bring online even if it does.

I read about the problem in the print edition of Information Week for March 14, 2011. But the article by J. Nicholas Hoover is available online. The gist is that the data center is extremely old with inadequate heating and cooling, poor power with inadequate backup power and an unreliable backup of data and processing. The software is badly outdated and not up to the needs of a modern enterprise.

The plans and financing are in place - but the money may dry up:

Most of the funding for the new data center will come from $500 million made available through the American Recovery and Reinvestment Act of 2009. However, the Republican-controlled House of Representatives' revised budget for the rest of fiscal 2011 would cut $120 million of that stimulus funding. If that happens, one of the first things to go could be $100 million in software and system upgrades planned for the new data center.

Millions of people rely on the Social Security system for money to survive. The system is one lightning strike from disaster. Or one mouse shorting a circuit. If the primary system goes down the backup could take 5 days to bring online. In five days people could - probably will - die. I may not think much of Social Security, but the system is in place, and we have to make sure it doesn't fail. For too many people it's their only safety net. Write your congressman not to cut any of the funding for the Social Security data center upgrade.

Cord Blood Registry suffers breach

Originally published 3/17/11 on lubbockonline.com/glasshouses


Last month Scamsafe.com reported that Cord Blood Registry (CBR), a company that stores umbilical cord blood for future use, suffered a data breach in December of 2010:

A CBR computer and data backup tapes were stolen from an employee's locked automobile. The stolen tapes contained customer names, Social Security numbers, driver's licenses and/or credit card numbers. This is the "mother load" of personal identifying information for identity thieves.

This is a pretty serious breach, and a good (sic) example of how not to handle any type of data, especially sensitive customer data. The thief broke into the car through the window. Never leave your computer in the passenger compartment where it can be seen - even if you've encrypted the data, which CBR didn't do. A laptop is even more tempting to some thieves than a purse.

Because unencrypted customer data was kept on the seat of a car, 300,000 people are at risk for identity theft. If this was the first time this had happened it might be understandable. But there have been several widely publicized breaches involving stolen or lost laptops, including one more than 100 times the size of this breach at the Department of Veterans Affairs. There is no excuse for a business allowing unencrypted data anywhere, but especially not on laptops or portable media.

Cars are hackable, too.

Originally published 3/16/11 on lubbockonline.com/glasshouses

Technology Review reports that Tadayoshi Kohno, Stefan Savage and a team of researchers are able to take control of cars' computer systems remotely using smart phones. Well, as remote as a Bluetooth signal will allow. It is important to note that the car they used was a mass-production 2009 model. That means it was one of the less computerized cars available. Of course, any car without Bluetooth is safe from these attacks.
But in a car with Bluetooth, not only was it possible to take control of the car over Bluetooth, it was possible through several different attack vectors and with phones that weren't paired to the car. Once in, the researchers had complete control of the car's computer systems. That means they could do everything from activating the GPS (how did you think OnStar tells 911 where you are?) to disabling the brakes. With total control of the computer they could start or stop the engine, control the air and heat, and control the door locks, to name a few things. No one thinks these attacks are out in the wild, but it's past time for auto manufacturers to start building security into their computing software and hardware.

Researchers identify anonymous emails with 80-90% accuracy - I say not good enough

Originally published 3/14/11 on lubbockonline.com/glasshouses


At first glance it looks like a good thing. Researchers at Concordia University have devised a way to identify the authors of anonymous email. This is a great boon to prosecutors seeking to identify people using anonymous email accounts for illegal activity. Unlike an IP address, which can only be used to determine where an email was authored, this system will identify the author, and will do it with 80-90% accuracy.

Wait a minute. 80-90% accuracy is pretty good in some contexts, but in criminal cases? The reason for the research is sound:

“In the past few years, we’ve seen an alarming increase in the number of cybercrimes involving anonymous emails,” says study co-author Benjamin Fung, a professor of Information Systems Engineering at Concordia University and an expert in data mining – extracting useful, previously unknown knowledge from a large volume of raw data. “These emails can transmit threats or child pornography, facilitate communications between criminals or carry viruses.”

On an emotional level 80-90% seems pretty good, but is that good enough when you may be taking years from a person's life? In some cases, you could be taking their life. The case of Tim Cole is one of the most prominent examples, both locally and nationally, of a person convicted on evidence that jurors thought was better than 90% accurate, but turned out to be 100% wrong. Further reading of the press release from Concordia shows that, once criminals become aware of this technique, 80-90% might be optimistic:

“Let’s say the anonymous email contains typos or grammatical mistakes, or is written entirely in lowercase letters,” says Fung. “We use those special characteristics to create a write-print. Using this method, we can even determine with a high degree of accuracy who wrote a given email, and infer the gender, nationality and education level of the author.”

So all I have to do to fool this system is vary my writing style. Intentionally misspell words in some emails, be meticulously correct in others. Make grammatical mistakes in some, not in others. Or just always make mistakes when using anonymous email that I don't usually make in my signed email.
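To make that concrete, here is a toy sketch - my own illustration, not the Concordia researchers' method - of the kind of surface features a "write-print" might capture, and how deliberately changing style shifts them:

# Toy "write-print": a handful of surface features extracted from a message.
# Feature choices here are illustrative assumptions, not the published technique.
import re

def write_print(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "lowercase_ratio": sum(w.islower() for w in words) / max(len(words), 1),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "words_per_sentence": len(words) / max(len(sentences), 1),
    }

normal = "I will see you at the meeting tomorrow. Please bring the report."
disguised = "ill c u at teh meeting tmrw!! bring the report"
print(write_print(normal))
print(write_print(disguised))  # same author, very different surface profile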

Worse, given only 80-90% accuracy, how hard would it be for someone who receives a lot of email from me - or maybe even someone who reads this blog - to frame me using email? When it comes to criminal cases, 80-90% doesn't cut it.

Would you recognize a human-hacker?

Originally published 3/11/11 on lubbockonline.com/glasshouses


As much as we focus on computer viruses, trojans, vulnerabilities and exploits, they are not the biggest risk to security - online or off. The biggest risk is us. Books have been written about it, from Kevin Mitnick's classic "The Art of Deception: Controlling the Human Element of Security" to Christopher Hadnagy's latest, "Social Engineering: The Art of Human Hacking"; the subject has been pretty thoroughly covered. But we don't have space for that kind of detail, so we're going to look at a more succinct study, the Department of Homeland Security's pamphlet on elicitation (pdf), the art of using ordinary conversation to coax out the information people want to keep secret. From the pamphlet:

In the espionage trade, elicitation is a technique frequently used by intelligence officers to subtly extract information about you, your work, and your colleagues.

Said another way, elicitation is the art of conversation honed by intelligence services to its finest edge.

Elicitation is nonthreatening, easy to disguise (and hard to prove) and it works. Why does it work? Because it's ordinary conversation, the type of thing we do all the time. Is that attractive person you just met so interested in your job because they want to get to know you, or because they're trying to find out something you know? That telemarketer that struck up a conversation with you yesterday - did you really tell him about your vacation plans next month? Just how did he get you to tell him that?

According to the DHS pamphlet the tools are something we all use to some degree:

Appeals to ego: "You must be really important. Everyone here seems to know you." You may respond with a denial, then talk about why what you do isn't really important.

Mutual interest: The person expresses an interest in something you're interested in and uses that to build a bond and increased trust.

Deliberate lies: "I've heard that..." A deliberate lie told knowing you know the truth. Most people have a strong desire to correct the mistake, and we all like to be part of the "in crowd" with insider knowledge.

Volunteering information: It's a simple trade. They give you something in hopes you will give them something. Sales people do this all the time, usually telling you that the price is about to go up, the offer is about to expire, or they're almost out and it's going to be weeks before they get more. If it works, you buy whatever they're selling. For a scam artist, you give them your information, such as credit card numbers, name, address, and maybe even SS#.

Assumed knowledge: Just enough is said to give the impression of knowledge in an area so you'll discuss it.

As I read this list I thought about calls I'd received, both at work and at home, from telemarketers. Almost every one of these tools had been used against me in one form or another.  Then in the WalMart parking lot tonight another one was used on me, the appeal for help:

"Could you spare some change? I'm trying to get some food for me and my wife."

I've had my own answer to this type of appeal for years, "Come with me and I'll buy you some food." He said he was getting his wife, got in the passenger seat of a car a row over, and they left.

The DHS pamphlet is aimed at preventing espionage, but the same techniques are used by malware authors and conmen to build trust and encourage us to give them what they want. One reason these techniques are so effective is that they are the things we all do in the normal course of communicating with others. Try going through a day looking for the things you and the people you interact with do as you communicate. Then see if you can tell who is just making conversation and who is trying to get something from you.

 

 

It's easy to lose control of your creation online

Originally published 3/10/11 on lubbockonline.com/glasshouses




Noam Galai is a photographer who took some pictures of himself back in 2006 and posted them on Flickr. A few months later a friend mentions seeing his face on a t-shirt. He doesn't really believe her, but a couple of months later he's in a store and sees the shirt. The whole story is chronicled in a blog post and 10-minute video interview by fstoppers titled "The Stolen Scream." It's a fascinating story, and seeing how far his image has travelled is amazing - and only one user paid him for it.

There are a lot of lessons here. The best may be Noam's reaction to this theft of his IP. He could have watermarked his images (he still doesn't). He could be sending lawyers after all of the companies using his image without permission. He's not - although he does admit that companies using his work without asking does bother him. But he's not bitter. He seems more amused than anything.

Another lesson, one pointed out by Lee, the fstoppers blogger, is that if we're honest, most of us have no right to point fingers at the people using Noam's image without even acknowledging it's his. Most of us have downloaded music, or accepted a burned CD from a friend.

The last lesson, the title of this post, is that once you put something online you surrender control of that information to the world at large. So if you don't want the world to see something, don't post it online.

Facebook good and bad

Originally published 2/1/11 on lubbockonline.com/glasshouses


Facebook has become a centerpiece in many people's lives, and that focus is showing in the stories generated by its users. Here are some of the stories from the last few days:

A man in Rochester, NY, was stabbed by his girlfriend because of comments he made on Facebook. Wait, no, she stabbed him because he friended another woman.

Four teens in Naples, Fl are accused of making death threats on Facebook.

A doctor diagnosed a child's leukemia via Facebook.

A man in Columbia, Ill. is indicted for "enticing minors" on Facebook.

Facebook can be a boon or a bane. Be careful what you do there.

 

Suit opposing "nude scanners" will be heard Thursday

Originally published 3/8/11 on lubbockonline.com/glasshouses

The Threatlevel blog at Wired.com reports that the lawsuit filed by the Electronic Privacy Information Center will be heard by the U.S. Court of Appeals. At issue are potential health problems and the effectiveness of the scanners. The scanners were pushed into service over the objections of privacy advocates as well as the questions on their usefulness from other government organizations, such as the Government Accountability Office (GAO).

I hope these scanners are removed from service, but I doubt they will be. Too much money has been spent, and someone would have to take the fall for the security blunder. Even when a terrorist gets past the scanner the TSA won't admit they're ineffective. The agency will say that the terrorists are a wily bunch who came up with new tactics to circumvent our almost air-tight security. Never mind that the tactics have been used by smugglers to get contraband into and out of countries for centuries.


Homeland Security sees the light. Or do they?

Originally published 3/7/11 on lubbockonline.com/glasshouses



Declan McCullagh of CBS' Tech Talk blog reported that the Department of Homeland Security has extended the deadline for compliance with Real ID, the national ID passed by Congress in 2005, to 2013. Similar reports came from Fox News and CNN. This is good news to anyone who values privacy and recognizes that the Real ID initiative does much to make it easier to track citizens and little to actually stop terrorists.

But I don't know if the reports are true. I receive the press release feed from Homeland Security, and I never saw this release. Declan McCullagh links to a PDF of the announcement at the Office of the Federal Register site, but the link is dead, and there is no other mention of it on the site. A search of the DHS website reveals no documents on Real ID mentioning an extension to 2013.

If the deadline has been extended this is good news - but not really surprising. Several states have flatly refused to comply because of concerns over the initiative. Concerns go beyond privacy. The costs of implementing it are astronomical, the security benefits questionable, and the increase in the government's ability to probe into law-abiding citizens' lives unbelievable. It was a bad idea with bad implementation from the start, and it needs to just go away.

Monday, July 25, 2011

Supreme Court: Corporate privacy does not trump Freedom of Information Act

Originally published 3/4/11 on lubbockonline.com/glasshouses


The Electronic Frontier Foundation (EFF) reports that the Supreme Court denied corporations the same privacy rights as individual citizens when the government is responding to Freedom of Information Act (FOIA) requests. This might seem like a no-brainer, but legally corporations are considered persons, so it was only a matter of time before a FOIA request came into conflict with a corporation's 'personal' rights.


AT&T's lawyers argued that as a corporate citizen it was provided the same exemptions as a private citizen. A coalition of groups ranging from the EFF to the National Security Archive filed an amicus brief explaining why corporations were not, and should not be, considered persons under FOIA. The Court obviously agreed with them; in doing so, it picked apart the term "personal privacy," using definitions, precedents, and a little horse sense to overturn the lower court's decision. One of my favorite passages was the last paragraph of page 7 continuing onto page 8:


AT&T’s argument treats the term “personal privacy” as simply the sum of its two words: the privacy of a person. Under that view, the defined meaning of the noun “person,” or the asserted specialized legal meaning, takes on greater significance. But two words together may assume a more particular meaning than those words in isolation. We understand a golden cup to be a cup made of or resembling gold. A golden boy, on the other hand, is one who is charming, lucky, and talented. A golden opportunity is one not to be missed. “Personal” in the phrase “personal privacy” conveys more than just “of a person.” It suggests a type of privacy evocative of human concerns—not the sort usually associated with an entity like, say, AT&T.


The Supreme Court explains that the real meaning of a phrase can be more than the sum of its parts, and shows that while a corporation may be a citizen on paper, it is not one in fact, and does not deserve the same privacy considerations as a living, breathing person. They probably didn't need to go to all that trouble. As they explain, the FOIA already has protections for corporations.


This was a good decision, and there was even some (perhaps ill-advised) humor at the end. The concluding line of the decision said, "We trust that AT&T will not take it personally." While it seems obviously tongue-in-cheek to me, Lyle Denniston at SCOTUSblog feels that the sentence contradicts the ruling. I would say he's just being contrary, but law is all about words, their meanings, and the way they're used in a document. That little joke could cause privacy advocates and Supreme Court justices headaches in the future.