Monday, March 03, 2008

MORE on Privacy...

I was reading the IDS last week and came across a really interesting article about some new monitoring software that Microsoft is in the midst of developing. The new software (unnamed as of right now) will allow employers to monitor employees’ body temperature, heart and respiration rates, brain signals, blood pressure, and facial expressions. WOW. Microsoft explains that they are developing this software in order to alert managers if an employee seems to be depressed, over-worked, or stressed. Well, okay… but I’m sure that there are plenty of individuals in the corporate world who are depressed, over-worked, or stressed. Does this mean that they can’t do well at their jobs, that they’re not going to function at maximum capacity?

Maybe, but not necessarily. How about the fact that just over a quarter of all Americans over the age of 18 suffer from a diagnosable mental disorder in a given year?* Mental illness is far more prevalent than many people might think, and so allowing an employer to monitor some of your most personal information and discover that you’re suffering from depression may not be in anyone’s best interest.

One opponent of the monitoring software stated: “I can see how some employers might want to know their employees’ stress levels or something like that, but a good company would already have policies in place (to deal with those issues).” I just can’t imagine a workplace in which all of my biometric data was measured and monitored. Oftentimes, if I’m having a bad day, I’d rather keep it to myself. I understand the idea that managers want to be alerted if their employees are over-worked or stressed, but I feel that these are issues each employee should handle him/herself. If there is a problem, it should be up to the employee’s discretion to decide whether or not he/she wants to make an issue of it. Each company needs a structure in which employees feel comfortable coming to management and talking about these things directly.

The article stated that this technology won’t appear in the workplace anytime soon, and IU law professor Fred Cate explained that the critical issue is how the software is going to be used. He makes a good argument: “It’s clear that it could have enormous potential for invading privacy, but so does lots of other technology that we use.”

So, any thoughts?



Blogger Ashley said...

Do you think employers want to monitor these health issues in order to determine health insurance benefits? If an employee is stressed out and needs medical attention, whether it’s drugs or a trip to the doctor, there is a chance the insurance will cover it. They could be monitoring employees’ health in order to keep their insurance costs low. I agree with you that there are times I don’t want to tell people what’s wrong with me, but at the same time employers are looking at reducing costs to increase the bottom line. They are not going to want people who are always sick. This might also apply to people who are always calling in sick. If the software can actually tell when a person is getting sick, it might help the company determine how many sick days should be given to its employees. It could also track employees’ progress and determine whether or not they are getting their work done even while they are sick. It’s just kind of creepy to think that a software program could do that, but I guess what isn’t possible these days?

1:40 PM  
Blogger Robin said...

I think Ashley and Lilly both bring up interesting points. There is a big possibility for abuse with the capabilities of this software.

One potential area of abuse concerns the reliability of the software’s results. Studies examining the accuracy of the polygraph test, the “lie detector,” have shown it to be unreliable. Under rules like the admissibility of evidence in court, lie detector results are out, but how can we expect employers to be held to the same standard? I can’t imagine that Microsoft will be able to ensure 100% accurate results from such a new and risky piece of software. That raises the question: what if you became the victim of a computer glitch or a misreading and were falsely labeled as depressed? This false positive could potentially cost someone a job, or perhaps give an insurance company the reason it was looking for to deny an individual’s coverage. While we can ensure that the legal system won’t allow an individual’s fate to be determined by an inaccurate test, I can’t foresee any way of ensuring this doesn’t happen in the private sector without the legal system stepping in to intervene. I’m not sure what should or will happen, but it is interesting to think about.

5:02 PM  
