Contactless payments are one of those ideas that make instant sense—in theory. Instead of forcing people to stand in line waiting to deal with a cashier, businesses can use a scanner that reads whatever card or phone is waved in front of it and lets the customer go on his or her merry way. It's a technology that the major credit card companies are pushing, but it's not without its growing pains, as the battle continues between security experts who want a more open approach to device security and companies that seek to hide their methods of securing such devices.
Despite the issues surrounding contactless payment deployment, Visa has taken a significant step forward in designing its version of this technology by unveiling its newest product: the Visa Micro Tag.
The Micro Tag is a small device meant to attach to a keychain. It uses Visa's payWave system to conduct and verify the actual transaction. No number is imprinted on the device, which, according to Visa, is one of the Micro Tag's security features. According to Visa's Micro Tag homepage, payWave will only activate once the tag is within 1-2 inches of the scanner, which will then indicate that the appropriate information is being processed through the "secure" Visa network. Users making purchases under $25 won't even have to sign a receipt.
There are, however, some practical concerns standing between you and your insta-purchase keychain. There are currently a number of contactless payment systems in use from various credit card and mobile phone companies. This alone is likely to make any store wary of upgrading its scanners to any single contactless payment system until compatibility with the major players can be guaranteed. Security researchers have also raised issues regarding both how the credit card numbers are transmitted and the fact that none of the companies developing contactless payment systems are willing to allow independent security developers to examine their systems. For the moment, it's not even clear whether Visa uses an encryption algorithm to communicate with the scanner or if such data is transmitted "in the clear." The company web site has little to say on the matter, noting only that "Visa Micro Tag is very secure, protected with the same multiple layers of security as traditional Visa cards."
Contactless payments are going to continue growing in the US—it's too good a concept to ignore. The big battles, then, are going to be fought over who controls the payment networks, how secure they are, and how wide a variety of devices can be supported by a single scanner unit. Hopefully someone will also come up with a way to simplify the keyfob end of the system—I can imagine carrying one these devices, but I'm really not sure I'd want one for Visa, MasterCard, and American Express all hanging off a single keychain.
With Windows Home Server just about ready to be released to the masses, this week Microsoft revealed the winners of the first Code2Fame Challenge—a contest dedicated to finding the most innovative Windows Home Server add-ins.
With the release of Windows Home Server, Microsoft has been doing all it can to promote the operating system as a fantastic development platform. The Software Development Kit has been geared to appeal to both hobbyists and professional developers with its simple but powerful APIs. With that in mind, the Redmond giant has been doing all it can to build up the small Windows Home Server development community. This past week, that community received a little more attention as the Microsoft-sponsored Code2Fame Challenge came to an end.
From June to August of this year, the Code2Fame Challenge was a contest open to developers in the United States and Canada. The goal was to see who could create the most interesting, useful, and innovative add-in for Windows Home Server. Besides notoriety, the winner of the contest would also receive $10,000. Second and third place finishers would get $5,000 and $1,000, respectively. The results of the contest, which were released Wednesday, were decided by a panel of "Home Server" experts including Ed Bott, Paul Thurrott, and Rob Enderle.
After receiving a variety of submissions, the judges awarded Andrew Grant the grand prize for Whiist, an add-in that allows users to easily create web pages on their Home Server *.com site simply through drag-and-drop actions. Once Whiist is installed, a "Website Management" tab is created on the Home Server Console. From there, a user can upload HTML documents (including ones from Word), edit pages, create photo albums, create new web sites, and set access restrictions. Grant's web site has a comprehensive overview of Whiist, including screenshots and tutorials.
The second and third place applications, while not nearly as impressive as Whiist, should still be useful for Home Server users. Finishing in second place, Jungle Disk uses Amazon.com's Simple Storage Service to backup Windows Home Server data remotely. Third place winner Community Feeds for Windows Home Server does just what its name implies: it uses RSS to deliver text, audio, and video content to Windows Home Server. Any Windows Media Connect-compatible device can then view the content, which opens up possibilities for creating custom feeds for your Xbox 360 or any other digital media receiver in your home.
Creating a Windows Home Server add-in is not overly difficult for those with a small amount of development experience. As long as the operating system is reasonably popular—and it should be, based on the feedback I've heard—the development community that focuses on it will continue to grow.
It is easy to get a fluid to move downhill relative to where it begins; getting it uphill can pose more of a challenge. Typically, one would use a pump to pressurize the fluid so that it can overcome the elevation difference. Before the advent of the various types of mechanical pumps, one could use an Archimedes' screw to move fluid from low-lying areas to higher elevations. It has also recently been demonstrated that water under a high voltage can defy gravity. Now, a new method offers a way to move fluids uphill without pumps or screws—just shakes. Research from a team of mathematicians at the University of Bristol demonstrates how one can move a droplet of fluid uphill simply by shaking the surface on which the fluid is resting.
When a droplet sits on an inclined surface, the force of gravity will pull it down. This typical response can be countered by a phenomenon known as contact angle hysteresis—when the edge of the droplet on the downhill side contacts the surface in a different manner than the uphill edge. This can result in a capillary force that counteracts the force of gravity and holds the droplet in place. Philippe Brunet has shown that not only can the shape of a droplet hold it in place on an inclined plane but, by deforming the droplet through shaking, it can be made to roll uphill.
In a paper set to be published in an upcoming edition of Physical Review Letters, the researchers show that glycerol-water droplets can actually roll uphill when one vibrates the surface they are on. The researchers have a set of four movies illustrating various aspects of this phenomenon available on the author's homepage. They propose that this motion is due to a combination of nonlinear friction effects between the fluid drop and the substrate and a symmetry breaking during the acceleration cycle of the shaking. In addition to simply moving uphill, the authors suggest that, by independently controlling the phase and amplitude of horizontal and vertical vibrations, one could force a droplet to move along an arbitrary path on a surface. This aspect of the work could lead to improvements in microfluidic devices, where control over where the fluid moves is of the utmost importance.
Physical Review Letters 2007, to be published
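The rectification mechanism the authors describe, nonlinear friction combined with symmetry breaking over a vibration cycle, can be illustrated with a deliberately crude one-dimensional model. The sketch below is not the paper's actual equations: it simply assumes an overdamped drop whose mobility differs for uphill and downhill motion (a stand-in for contact-angle hysteresis) and shows that an oscillating force then rectifies into net uphill drift.

```python
import math

def droplet_drift(theta=0.1, gamma_up=1.0, gamma_down=10.0,
                  drive=2.0, omega=20.0, g=9.8,
                  dt=1e-4, t_end=10.0):
    """Toy overdamped 1-D model (illustrative only): a drop on a
    vibrated incline whose contact-line friction is asymmetric,
    mimicking contact-angle hysteresis. Returns the final position
    along the incline (positive = uphill)."""
    x, t = 0.0, 0.0
    while t < t_end:
        # Net force along the incline: gravity pulls downhill,
        # shaking adds an oscillating component.
        f = -g * math.sin(theta) + drive * math.cos(omega * t)
        # Asymmetric mobility: the drop slides more easily uphill
        # than downhill, so the oscillation rectifies into drift.
        v = f / gamma_up if f > 0 else f / gamma_down
        x += v * dt
        t += dt
    return x
```

With symmetric friction (`gamma_down == gamma_up`) the oscillation averages out and gravity wins; with the asymmetry, the same shaking drives the drop uphill, which is the qualitative point of the result.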
Sony clearly doesn't have an issue with trying a few different pricing levels for its flagship PlayStation 3. The PlayStation 3 debuted with both a $500 and $600 price tag, but since that time much has changed.
With sales staying modest, Sony initially nixed the $500 PS3, then announced an 80GB unit, then dropped the price of the 60GB unit, and finally revealed that the 60GB unit was on "clearance." That left no official "entry level" PS3, so we've been waiting for Sony to address the issue.
In the meantime, you can see what Sony has done: the company has focused on reducing the cost of building the PS3 while also closely watching how sales of lower-priced units are doing. The time is ripe for a new PS3 model to hit the scene, and we strongly believe that the company is about to launch a $399 PS3 in time for the holiday season. We've been hearing rumors to this effect for some time, but now the evidence of a new PlayStation 3 configuration is almost undeniable: an FCC filing details a new model number for the system.
What this new model number means is impossible to know for sure; the FCC filing leaves out pictures to "avoid premature release of sensitive information prior to marketing or release of the product to the public." The product description tells us that there is no difference in the wireless configuration, CPU, or Bluetooth aspects of this new PlayStation 3. The information that details the differences has been conveniently left out of the released paperwork, for the aforementioned reason.
So what does this mean? We know something new is coming, but everything else is open to speculation. Luckily we have sources in the industry who have long been telling us about an upcoming $399 40GB PlayStation 3. A $399 PlayStation 3 would be a great way to get new consumers into the Blu-ray enabled system for the holidays, and it would help to counter the Xbox 360's lower price and newly announced pack-in software.
We also have a date to pin this information to: our sources tell us that the $399 PlayStation 3 hardware will launch on, or before, November 16. We're confident in this information, as our sources in this area have always given us accurate information in the past. The "sensitive information" in the FCC filing will go public 45 days from September 4, unless something changes. We're confident saying that Sony is readying a new low-priced weapon for the console wars, regardless. Frankly, we also think it makes good sense.
Why didn't Sony announce this at the Tokyo Game Show? We reason that the company will hold off as long as possible on the announcement so as not to stymie existing sales.
The 700MHz spectrum set to be auctioned by the Federal Communications Commission next January is some of the most highly sought after bandwidth to be made available in years. One major wireless player may be left on the outside looking in when the bidding begins, however, if Frontline Wireless has its way.
In a complaint (PDF) filed with the FCC late last week, Frontline accused Verizon of violating the FCC's lobbying rules. Frontline wants the FCC to impose sanctions on Verizon, up to and including being barred from bidding on the beachfront spectrum that will hopefully become home to a new wireless broadband network.
Frontline is upset about a September 17 meeting between Verizon, FCC Chairman Kevin Martin, the FCC's Wireless Bureau Chief, and a handful of other FCC staffers. After the meeting, Verizon filed an ex parte letter with the Commission that provided only a brief, one-sentence description of the event. Frontline calls the brief description an "arrogant violation" of the FCC's requirement that firms disclose the "summary of the substance" of their meetings with the FCC to other interested parties during ongoing proceedings (in this case, the rule-making process for the 700MHz auction).
The FCC later instructed Verizon to make a more detailed filing covering the substance of the discussions, which Verizon did on September 25 with an additional one-paragraph description. To no one's surprise, Verizon used the meeting to rehash its opposition to the open access rules adopted by the FCC for the spectrum auction. Indeed, Verizon has already sued the FCC in an attempt to get a federal court to overturn what the telecom describes as the "arbitrary" and "capricious" rules.
Like Verizon, Frontline didn't get what it wanted from the FCC during the rule-making process, either. The company had pitched a plan to the Commission under which the winner of the auction for 10MHz of the available spectrum would also be given half of the 24MHz spectrum allotted for public safety use. The company guaranteed it would build out the system within 10 years and promised it would reach 99 percent of all Americans.
Instead, the FCC decided to pair two separate 5MHz blocks (Block D) with larger blocks of spectrum already reserved for public safety use. Under the FCC's Public Safety/Private Partnership, the winning bidder(s) for the 5MHz blocks will need to build a national network good enough to meet coverage and redundancy requirements. The 10MHz public safety and 5MHz blocks can then be combined to operate a commercial network, but public safety traffic will get priority on the network.
White blocks indicate available 700MHz spectrum. Data source: FCC
Last week, Frontline asked the FCC to reconsider some of the rules, including what it described as the FCC's "capricious" $1.6 billion reserve prices for the 5MHz public safety blocks. The company also wants the likes of AT&T and Verizon barred from controlling more than 45MHz of the available spectrum in order to avoid "unacceptable anticompetitive effects."
Getting the FCC to bar Verizon from bidding in the upcoming auction would go a long way towards accomplishing Frontline's goal of keeping the large telecoms from monopolizing the spectrum. In the likely event that the FCC decides against preventing Verizon from bidding, Frontline helpfully attached a list of other possible sanctions, including fines and/or barring Verizon from further participation in the FCC's rule-making process.
If you didn't heed our analysis and Apple's subsequent warning about the new firmware's effects on unlocked iPhones, or if you simply drew the short straw to be the firmware upgrade guinea pig amongst your unlocked iPhone owning friends, relief is in sight. New techniques have appeared for both downgrading your iPhone's firmware back to 1.0.2 and getting it working with non-AT&T networks again. They aren't pretty, but it sounds like they work.
First, Gizmodo details a method for restoring your iPhone back to the previous firmware version using the updater software that is surprisingly left behind on your machine (as long as you did a 1.0.2 update on your iPhone sometime in the past, of course). Part of the process involves mucking around with updater files and packages, as well as the latest iNDependence software (unfortunately Mac-only for now) to get the iPhone pseudo-activated again. However, Gizmodo also reports that you won't be able to do anything but use WiFi and your 1.0.2-friendly third-party apps until you pick up a Turbo SIM to get the iPhone fully activated and working with the original unlock again.
Like we said: it ain't pretty, but it works.
iPhones that were unlocked with the commercial iPhoneSIMFree product are reportedly working with the 1.1.1 firmware. Fortunately, for those who don't want to pay iPhoneSIMFree's peace-of-mind price, the iPhone Dev Team hackers are working on smoothing out the process for downgrading iPhones and restoring the unlock, as well as an unlock compatible with the new firmware. It is indeed a cat and mouse game, so be sure to drop those valiant iPhone hackers some donations for all their hard work.
People for Internet Responsibility (PFIR) co-founder Lauren Weinstein has issued a proposal for a global Internet traffic analysis system capable of automatically detecting prejudicial bandwidth manipulation. Weinstein believes that implementation of his proposal could put an end to the impasse that has stalled the network neutrality debate.
Network neutrality is a model of broadband network operation that does not distinguish between different kinds of traffic for prioritization purposes. Applied to the Internet, network neutrality generally implies that all forms of traffic—regardless of the nature, source, or recipient—are given equal treatment and transmitted without selective degradation. The aim is to prevent the construction of a so-called tiered Internet, which critics argue would lead to widespread quality of service (QoS) discrimination that would stifle freedom of expression on the Internet and allow the broadband duopoly to set up exploitative digital toll booths to cash in on content delivery. Supporters of a tiered Internet argue that network neutrality would impede innovation and degrade network operator property rights. The debate has become increasingly hostile, and little headway has been made.
Getting the facts, and acting on them
A recent proposal issued by PFIR aims to offer a more constructive way to move the net neutrality debate forward. The proposal suggests establishing a distributed global Internet traffic monitoring system that would facilitate rapid detection of abusive network manipulation. At a minimum, this system could be used to provide insight and statistical data so that legislators can make informed decisions about what regulatory solutions are actually needed, if any.
PFIR says the system could also be used for a real-time network neutrality enforcement framework. Legislators could craft a set of uniform network handling standards and an automated system could be devised to leverage the monitoring statistics and impose corrective sanctions when deviations are detected. The standards could be adjusted as needed in order to limit any potential negative impact on innovation.
"This proposal, if implemented from both the technological measurements standpoint and on a legislative basis to whatever degree may be deemed appropriate, would offer what amounts to a 'status quo' operating environment to ISPs so long as they continued to compete in an open, fair, and nondiscriminatory manner, but would enable the promise of quick and decisive corrective actions in the face of any specific abuses as detected by, and defined in conjunction with, the proposed global Internet measurement infrastructure," says the proposal. "Triggers and remedies under the approach proposed here would be as specific and quantitatively precise as possible, and only activated in the face of defined violation conditions based on the hard data from the measurement environment. In the absence of any defined abuse conditions being triggered, ISPs and related operations would proceed on a free market basis without new constraints."
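To make the idea concrete, here is a hypothetical sketch of the kind of quantitative trigger the proposal calls for. Nothing here comes from PFIR's document: the baseline statistics, the z-score test, and the threshold value are all illustrative assumptions about how "defined violation conditions based on the hard data" might be expressed in code.

```python
from statistics import mean, stdev

def neutrality_trigger(baseline_mbps, observed_mbps, z_threshold=3.0):
    """Hypothetical violation trigger: flag an ISP only when observed
    throughput for a traffic class falls far below that ISP's own
    historical baseline. The threshold is explicit and quantitative,
    so enforcement activates only under defined conditions."""
    mu = mean(baseline_mbps)
    sigma = stdev(baseline_mbps)
    if sigma == 0:
        # Perfectly flat history: any measurable drop is anomalous
        return observed_mbps < mu
    z_score = (mu - observed_mbps) / sigma
    return z_score > z_threshold
```

Under a scheme like this, ordinary fluctuation in measured throughput never triggers a sanction; only a statistically large, class-specific drop would, leaving compliant ISPs operating under the status quo the proposal promises.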
The concept described in the proposal is intriguing on several different levels, but the costs and challenges of creating such a massive monitoring system should be questioned. In some respects, a fully automated system is preferable to other kinds of regulatory proposals because it would reduce the potential for inconsistent enforcement. Most conventional Net Neutrality regulatory solutions that have been proposed thus far would empower government agencies like the FCC, which aren't necessarily more reliable than the Internet service providers themselves. An automated solution—assuming that it could be devised in a manner that prevents outright manipulation—would be far more transparent and less susceptible to the frailties of bureaucracy.
On the other hand, there are clear cost and privacy concerns that will afflict any kind of Internet traffic monitoring system of this scale. In order to make this proposal viable, PFIR will have to address such concerns and provide more specific implementation details.
AT&T has rolled out new Terms of Service for its DSL service that leave plenty of room for interpretation. From our reading of the document, in concert with several others, the ToS appears to give AT&T the right to disconnect customers who criticize the company on blogs or in other online settings.
In section 5 of its legal ToS, AT&T stipulates the following:
AT&T may immediately terminate or suspend all or a portion of your Service, any Member ID, electronic mail address, IP address, Universal Resource Locator or domain name used by you, without notice, for conduct that AT&T believes (a) violates the Acceptable Use Policy; (b) constitutes a violation of any law, regulation or tariff (including, without limitation, copyright and intellectual property laws) or a violation of these TOS, or any applicable policies or guidelines, or (c) tends to damage the name or reputation of AT&T, or its parents, affiliates and subsidiaries.
Translation: "conduct" that AT&T "believes" "tends to damage" its name, or the name of its partners, can get you booted off the service. Note the use of "tends to damage": the language of the contract does not require any proof of any actual damage.
The story, which surfaced at the venerable Slashdot, has many people outraged and is being discussed as a prime example of why net neutrality is needed. I think that puts the cart before the horse, however. Here's why.
There's nothing which guarantees that what AT&T is doing here is either legal or what the company intends. This wouldn't be the first time that poorly thought-out legal language made it into a contract used by a major corporation. Why are we thinking it's an oversight? Simple: we believe that AT&T isn't misguided enough to expect to be able to squash First Amendment rights with a ToS contract without losing both face and their cozy legal status.
As an Internet service provider, AT&T is protected from lawsuits relating to the distribution of illegal materials online because it is excused from having to monitor and police its own network for such activity. It is also protected against what its users say and do online. For instance, if I were an AT&T customer and posted damaging comments about Vodafone using AT&T's service, Vodafone couldn't go after AT&T just because it was my (fictional) ISP. Yet if AT&T begins to monitor and police its own network to protect its own corporate identity, the company will be setting itself up for lawsuits from parties seeking the same protections AT&T grants itself. In this way, AT&T has to tread carefully.
Even more important, should AT&T ever attempt to exercise this contractual "right," it will do far more harm to its "name" than the user(s) in question could have ever done… if what's shut down is just a regular user expressing typical criticism of a corporation. The backlash would be intense, to say the least.
We've requested clarification of the issue, but we'd also like to note that AT&T also reserves the right to disconnect users with "insecure" computers, and we've not heard of that happening, either. The clause may be nothing more than a toothless scare tactic, or it may be aimed at something more insidious than mere criticism of the company. As it is currently worded, however, it has plenty of AT&T customers concerned, if my inbox is any indication.
It's not often that one gets a chance to attend a demonstration of a new method of human-computer interaction. Having been too young to witness the development of the command line in the 1950s or the modern graphical user interface at Xerox PARC in the 1970s, I found it a genuine thrill to visit Microsoft's campus for a personal demo of "surface computing." While future computer historians are unlikely to view this technology as being anywhere near as groundbreaking as the CLI or GUI, the multi-touch interface nonetheless serves as an innovative way of interacting with the personal computer.
Microsoft Surface has taken many years to come to fruition. The original idea was developed in 2001 by employees at Microsoft Research and Microsoft Hardware, and it was nurtured towards reality by a team that included architect Nigel Keam. Not content with merely coming up with a new idea, the Surface team is committed to actually releasing it to the commercial market as early as the end of 2007. From there, the team hopes that the product will make its way from retail and commercial establishments to the home, in much the same manner as large-screen plasma displays have migrated out of the stadium and into the living room over the past few years.
Microsoft began the Surface project back in 2001, after the idea had already been proposed by employees in the Microsoft Research division. For many years the work was hidden under a non-disclosure agreement. Keam mentioned that, although necessary, the NDA made it frustrating when Microsoft scheduled the official Surface announcement just days after Apple announced the iPhone. While both projects employ touch-sensitive screens with multi-touch capability, they are very different from each other, and the development timelines clearly show that neither was "copied" from the other. As Keam put it: "I only wish I could work that fast!"
Beyond creating the hardware, however, the Microsoft Surface team has identified several different scenarios where the device could be used in retail and commercial environments, and it has developed demonstration software that shows off the potential of the system. Microsoft has partnered with several retail and entertainment companies and will be co-developing applications customized for these environments.
Let's take a look.
Senior marketing director Mark Bolger models Surface
Essentially, Microsoft Surface is a computer embedded in a medium-sized table, with a large, flat display on top that is touch-sensitive. The software reacts to the touch of any object, including human fingers, and can track the presence and movement of many different objects at the same time. In addition to sensing touch, the Microsoft Surface unit can detect objects that are labeled with small "domino" stickers, and in the future, it will identify devices via radio-frequency identification (RFID) tags.
The demonstration unit I used was housed in an attractive glass table about three feet high, with a solid base that hides a fairly standard computer equipped with an Intel Core 2 Duo processor, an AMI BIOS, 2 GB of RAM, and Windows Vista. The team lead would not divulge which graphics card was inside, saying only that it was a moderately powerful card from either AMD/ATI or NVIDIA.
The display screen is a 4:3 rear-projected DLP display measuring 30 inches diagonally. The screen resolution is a relatively modest 1024×768, but the touch detection system has an effective resolution of 1280×960. Unlike the screen resolution, which for the time being is constant, the touch resolution varies according to the size of the screen used—it is designed to work at a resolution of 48 dots per inch. The top layer also works as a diffuser, making the display clearly visible at any angle.
Unlike most touch screens, Surface does not use heat or pressure sensors to detect when someone has touched the screen. Instead, five tiny cameras take snapshots of the surface many times a second, similar to how an optical mouse works, but on a larger scale. This allows Surface to capture many simultaneous touches and makes it easier to track movement, although the disadvantage is that the system cannot (at the moment) sense pressure.
Five cameras mounted beneath the table read objects and touches on the acrylic surface above, which is flooded with near-infrared light to make such touches easier to pick out. The cameras can read a nearly infinite number of simultaneous touches and are limited only by processing power. Right now, Surface is optimized for 52 touches, or enough for four people to use all 10 fingers at once and still have 12 objects sitting on the table. (For more on the camera system and hardware, check out our launch coverage of the system.)
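A toy version of the camera pipeline helps explain why simultaneous touches come almost for free. The sketch below is a heavily simplified assumption, not Microsoft's code: it treats one grayscale infrared frame as a grid, thresholds the bright (touched) pixels, and flood-fills them into blobs, reporting each blob's centroid as a touch point. Tracking many touches is then just a matter of finding many blobs in the same frame.

```python
def extract_touches(frame, threshold=200):
    """Illustrative camera-based touch detection: find connected
    regions of bright pixels in an IR frame (list of lists of
    0-255 intensities) and return one (row, col) centroid per blob."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill this bright region into one blob
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # The blob centroid is the reported touch coordinate
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                touches.append((cy, cx))
    return touches
```

In the real unit, per-frame blobs would also be matched across frames to track motion, and the "domino" stickers would presumably be recognized by their printed patterns rather than as plain blobs.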
The unit is rugged and designed to take all kinds of abuse. Senior director of marketing Mark Bolger demonstrated this quite dramatically by slamming his hand onto the top of the screen as hard as he could—it made a loud thump, but the unit itself didn't move. The screen is also water resistant. At an earlier demonstration, a skeptical reporter tested this by pouring his drink all over the device. Microsoft has designed the unit to put up with this kind of punishment because it envisions Surface being used in environments such as restaurants where hard impacts and spills are always on the menu.
The choice of a 4:3 screen was, according to Nigel Keam, mostly a function of the availability of light engines (projectors) when the project began. Testing and user feedback have shown that the 4:3 ratio works well, and the addition of a slight amount of extra acrylic on each side leaves the table looking like it has normal dimensions.
Built-in wireless and Bluetooth round out the hardware capabilities of Surface. A Bluetooth keyboard with a built-in trackpad is available to diagnose problems with the unit, although for regular use it is not required.
States across the country are having trouble crafting laws to control digital content without running afoul of constitutional protections on speech. And it's not just video games, though these laws have achieved the most notoriety. The state of Ohio this week found itself on the losing end of a judicial decision that struck down a state law banning all Internet transmission of content that is "harmful to minors" if the sender should know that someone on the receiving end is a minor.
Federal judge Walter Rice issued a permanent injunction against the law after pointing out the many ways that it could restrict legitimate speech between adults. The Supreme Court has already found that Internet users have reason to believe that there are minors present in every chat room, for instance; under the Ohio law, anything deemed obscene for minors that was uttered in a chat room could therefore possibly lead to prosecution.
The law was first passed in 2002 and immediately challenged by Media Coalition, a group of booksellers, newspapers, and providers of sexual health information. The judge soon granted a preliminary injunction, and the state amended the law in order to avoid several of the original problems. Despite the changes, Judge Rice still found the bill too broad when applied to the Internet, where it can be difficult or impossible to know the ages of every person that one interacts with.
Media Coalition's executive director, David Horowitz, praised the decision. "While we should have adequate legal safeguards to shield children from objectionable content," he said, "those safeguards cannot unreasonably interfere with the rights of adults to have access to materials that are legal for them."
As in several of the video game lawsuits, the state may be on the hook for attorneys' fees in this case. What makes the entire episode so galling is the fact that Ohio legislators were warned about precisely these problems when they drafted the bill. Back in 2002, when Media Coalition filed its lawsuit, co-counsel H. Louis Sirkin pointed out that "the legislature and Governor were repeatedly informed of the unconstitutionality" of these measures, yet they pressed ahead anyway.
Declan McCullagh of CNet suggests that politicians be forced to pay out of their own pockets when these sorts of avoidable costs end up being paid by taxpayers. Given that the legislators would be the ones who would need to pass such a bill, this is… unlikely, though it would certainly give legislators a good reason to proceed carefully when drafting new legislation.
According to the Dayton Daily News, the state is still deciding whether it will attempt an appeal.
CNet has a more detailed description of the judge's reasoning.
Media Coalition's original 2002 press release (PDF)
The judge's decision (PDF)