If you didn't heed our analysis and Apple's subsequent warning about the new firmware's effects on unlocked iPhones, or if you simply drew the short straw and became the firmware-upgrade guinea pig among your unlocked-iPhone-owning friends, relief is in sight. New techniques have appeared both for downgrading your iPhone's firmware back to 1.0.2 and for getting it working with non-AT&T networks again. They aren't pretty, but it sounds like they work.
First, Gizmodo details a method for restoring your iPhone back to the previous firmware version using the updater software that is surprisingly left behind on your machine (as long as you did a 1.0.2 update on your iPhone sometime in the past, of course). Part of the process involves mucking around with updater files and packages, as well as the latest iNDependence software (unfortunately Mac-only for now) to get the iPhone pseudo-activated again. However, Gizmodo also reports that you won't be able to do anything but use WiFi and your 1.0.2-friendly third-party apps until you pick up a Turbo SIM to get the iPhone fully activated and working with the original unlock again.
Like we said: it ain't pretty, but it works.
iPhones that were unlocked with the commercial iPhoneSIMFree product are reportedly working with the 1.1.1 firmware. Fortunately, for those who don't want to pay iPhoneSIMFree's peace-of-mind price, the iPhone Dev Team hackers are working on smoothing out the process for downgrading iPhones and restoring the unlock, as well as an unlock compatible with the new firmware. It is indeed a cat and mouse game, so be sure to drop those valiant iPhone hackers some donations for all their hard work.
People for Internet Responsibility (PFIR) co-founder Lauren Weinstein has issued a proposal for a global Internet traffic analysis system capable of automatically detecting prejudicial bandwidth manipulation. Weinstein believes that implementation of his proposal could put an end to the impasse that has stalled the network neutrality debate.
Network neutrality is a model of broadband network operation that does not distinguish between different kinds of traffic for prioritization purposes. Applied to the Internet, network neutrality generally implies that all forms of traffic, regardless of nature, source, or recipient, receive equal treatment and are transmitted without selective degradation. The aim is to prevent the construction of a so-called tiered Internet, which critics argue would lead to widespread quality of service (QoS) discrimination, stifling freedom of expression online and allowing the broadband duopoly to set up exploitative digital toll booths to cash in on content delivery. Supporters of a tiered Internet counter that network neutrality would impede innovation and infringe on network operators' property rights. The debate has become increasingly hostile, and little headway has been made.
Getting the facts, and acting on them
A recent proposal issued by PFIR aims to offer a more constructive way to move the net neutrality debate forward. The proposal suggests establishing a distributed global Internet traffic monitoring system that would facilitate rapid detection of abusive network manipulation. At a minimum, this system could be used to provide insight and statistical data so that legislators can make informed decisions about what regulatory solutions are actually needed, if any.
PFIR says the system could also be used for a real-time network neutrality enforcement framework. Legislators could craft a set of uniform network handling standards and an automated system could be devised to leverage the monitoring statistics and impose corrective sanctions when deviations are detected. The standards could be adjusted as needed in order to limit any potential negative impact on innovation.
"This proposal, if implemented from both the technological measurements standpoint and on a legislative basis to whatever degree may be deemed appropriate, would offer what amounts to a 'status quo' operating environment to ISPs so long as they continued to compete in an open, fair, and nondiscriminatory manner, but would enable the promise of quick and decisive corrective actions in the face of any specific abuses as detected by, and defined in conjunction with, the proposed global Internet measurement infrastructure," says the proposal. "Triggers and remedies under the approach proposed here would be as specific and quantitatively precise as possible, and only activated in the face of defined violation conditions based on the hard data from the measurement environment. In the absence of any defined abuse conditions being triggered, ISPs and related operations would proceed on a free market basis without new constraints."
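To make the idea concrete, here is a minimal sketch of the kind of check such a measurement infrastructure might automate: compare throughput samples for different traffic classes on the same link and flag any class whose performance falls far below the baseline. The function names, data, and threshold below are illustrative assumptions on our part, not details from PFIR's proposal.

```python
from statistics import median

def detect_degradation(samples, threshold=0.5):
    """samples: dict mapping traffic class -> list of throughput samples (Mbps).
    Returns classes whose median throughput falls below `threshold` times
    the best-performing class's median (a hypothetical trigger condition)."""
    medians = {cls: median(vals) for cls, vals in samples.items()}
    baseline = max(medians.values())
    return sorted(cls for cls, m in medians.items() if m < threshold * baseline)

# Mock measurements from one ISP link: web and video traffic run near
# 10 Mbps, while p2p traffic is being selectively throttled.
measurements = {
    "web": [9.8, 10.1, 9.9, 10.0],
    "video": [9.7, 10.2, 9.6, 10.1],
    "p2p": [2.1, 1.9, 2.3, 2.0],
}
print(detect_degradation(measurements))  # → ['p2p']
```

A real deployment would obviously need far more sophisticated statistics, plus safeguards against gaming the measurements, but the basic shape—collect hard data, define a precise trigger, act only when it fires—is the "quantitatively precise" approach the proposal describes.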
The concept described in the proposal is intriguing on several different levels, but the costs and challenges of creating such a massive monitoring system should be questioned. In some respects, a fully automated system is preferable to other kinds of regulatory proposals because it would reduce the potential for inconsistent enforcement. Most conventional Net Neutrality regulatory solutions that have been proposed thus far would empower government agencies like the FCC, which aren't necessarily more reliable than the Internet service providers themselves. An automated solution—assuming that it could be devised in a manner that prevents outright manipulation—would be far more transparent and less susceptible to the frailties of bureaucracy.
On the other hand, there are clear cost and privacy concerns that will afflict any kind of Internet traffic monitoring system of this scale. In order to make this proposal viable, PFIR will have to address such concerns and provide more specific implementation details.
AT&T has rolled out new Terms of Service for its DSL service that leave plenty of room for interpretation. From our reading of it, in concert with several others, what we see is a ToS that attempts to give AT&T the right to disconnect its own customers who criticize the company on blogs or in other online settings.
In section 5 of its legal ToS, AT&T stipulates the following:
AT&T may immediately terminate or suspend all or a portion of your Service, any Member ID, electronic mail address, IP address, Universal Resource Locator or domain name used by you, without notice, for conduct that AT&T believes (a) violates the Acceptable Use Policy; (b) constitutes a violation of any law, regulation or tariff (including, without limitation, copyright and intellectual property laws) or a violation of these TOS, or any applicable policies or guidelines, or (c) tends to damage the name or reputation of AT&T, or its parents, affiliates and subsidiaries.
Translation: "conduct" that AT&T "believes" "tends to damage" its name, or the name of its partners, can get you booted off the service. Note the use of "tends to damage": the language of the contract does not require any proof of any actual damage.
The story, which surfaced at the venerable Slashdot, has many people outraged and is being discussed as a prime example of why net neutrality is needed. I think that puts the cart before the horse, however. Here's why.
There's nothing which guarantees that what AT&T is doing here is either legal or what the company intends. This wouldn't be the first time that poorly thought-out legal language made it into a contract used by a major corporation. Why are we thinking it's an oversight? Simple: we believe that AT&T isn't misguided enough to expect to be able to squash First Amendment rights with a ToS contract without losing both face and their cozy legal status.
As an Internet service provider, AT&T is protected from lawsuits relating to the distribution of illegal materials online because it is excused from having to monitor and police its own network for such activity. It is also protected against what its users say and do online. For instance, if I'm an AT&T customer and I post damaging comments about Vodafone using AT&T's service, Vodafone can't go after AT&T just because it's my (fictional) ISP. Yet if AT&T begins to monitor and police its own network to protect its own corporate identity, the company will be setting itself up for lawsuits from parties seeking the same protections AT&T grants itself. In this respect, AT&T has to tread carefully.
Even more important, should AT&T ever attempt to exercise this contractual "right," it will do far more harm to its "name" than the user(s) in question could have ever done… if what's shut down is just a regular user expressing typical criticism of a corporation. The backlash would be intense, to say the least.
We've requested clarification of the issue, but we'd also like to note that AT&T also reserves the right to disconnect users with "insecure" computers, and we've not heard of this happening, either. It may be nothing more than a toothless scare tactic, or it may be focused on something more insidious than mere criticism of the company. As it is currently worded, however, plenty of AT&T customers are concerned, if my inbox is any indication.
It's not often that one gets a chance to attend a demonstration of a new method of human-computer interaction. Having been too young to witness the development of the command line in the 1950s or the modern graphical user interface at Xerox PARC in the 1970s, I found it a genuine thrill to visit Microsoft's campus for a personal demo of "surface computing." While future computer historians are unlikely to view this technology as being anywhere near as groundbreaking as the CLI or GUI, the multi-touch interface nonetheless serves as an innovative way of interacting with the personal computer.
Microsoft Surface has taken many years to come to fruition. The original idea was developed in 2001 by employees at Microsoft Research and Microsoft Hardware, and it was nurtured towards reality by a team that included architect Nigel Keam. Not content with merely coming up with a new idea, the Surface team is committed to actually releasing it to the commercial market as early as the end of 2007. From there, the team hopes that the product will make its way from retail and commercial establishments to the home, in much the same manner as large-screen plasma displays have migrated out of the stadium and into the living room over the past few years.
Microsoft began the Surface project back in 2001, after the idea had already been proposed by employees in the Microsoft Research division. For many years the work was hidden under a non-disclosure agreement. Keam mentioned that, although necessary, the NDA made it frustrating when Microsoft scheduled the official Surface announcement just days after Apple announced the iPhone. While both projects employ touch-sensitive screens with multi-touch capability, they are very different from each other, and the development timelines clearly show that neither was "copied" from the other. As Keam put it: "I only wish I could work that fast!"
Beyond creating the hardware, however, the Microsoft Surface team has identified several different scenarios where the device could be used in retail and commercial environments, and it has developed demonstration software that shows off the potential of the system. Microsoft has partnered with several retail and entertainment companies and will be co-developing applications customized for these environments.
Let's take a look.
Senior marketing director Mark Bolger models Surface
Essentially, Microsoft Surface is a computer embedded in a medium-sized table, with a large, flat display on top that is touch-sensitive. The software reacts to the touch of any object, including human fingers, and can track the presence and movement of many different objects at the same time. In addition to sensing touch, the Microsoft Surface unit can detect objects that are labeled with small "domino" stickers, and in the future, it will identify devices via radio-frequency identification (RFID) tags.
The demonstration unit I used was housed in an attractive glass table about three feet high, with a solid base that hides a fairly standard computer equipped with an Intel Core 2 Duo processor, an AMI BIOS, 2 GB of RAM, and Windows Vista. The team lead would not divulge which graphics card was inside, saying only that it was a moderately powerful part from either AMD/ATI or NVIDIA.
The display screen is a 4:3 rear-projected DLP display measuring 30 inches diagonally. The screen resolution is a relatively modest 1024×768, but the touch detection system has an effective resolution of 1280×960. Unlike the screen resolution, which for the time being is constant, the touch resolution varies according to the size of the screen used; it is designed to work at a resolution of 48 dots per inch. The top layer also works as a diffuser, making the display clearly visible at any angle.
Unlike most touch screens, Surface does not use heat or pressure sensors to indicate when someone has touched the screen. Instead, five tiny cameras take snapshots of the surface many times a second, similar to how an optical mouse works, but on a larger scale. This allows Surface to capture many simultaneous touches and makes it easier to track movement, although the disadvantage is that the system cannot (at the moment) sense pressure.
Five cameras mounted beneath the table read objects and touches on the acrylic surface above, which is flooded with near-infrared light to make such touches easier to pick out. The cameras can read a nearly infinite number of simultaneous touches and are limited only by processing power. Right now, Surface is optimized for 52 touches, or enough for four people to use all 10 fingers at once and still have 12 objects sitting on the table. (For more on the camera system and hardware, check out our launch coverage of the system.)
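As a rough illustration of how a camera-based system turns frames into touch points (this is our own sketch, not Microsoft's code), the basic pipeline is: threshold each infrared snapshot, group bright pixels into connected blobs, and report each blob's centroid as a touch.

```python
from collections import deque

def find_touches(frame, threshold=128):
    """frame: 2D list of brightness values (0-255) from one IR snapshot.
    Returns one (row, col) centroid per connected bright region."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill this blob, collecting its pixel coordinates.
                queue, blob = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and frame[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*blob)
                touches.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return touches

# Two bright spots on an otherwise dark mock frame -> two touch points.
frame = [
    [0,   0,   0, 0, 0,   0],
    [0, 200, 210, 0, 0,   0],
    [0, 190, 205, 0, 0, 255],
    [0,   0,   0, 0, 0, 250],
    [0,   0,   0, 0, 0,   0],
]
print(len(find_touches(frame)))  # → 2
```

Tracking movement, as Surface does, would then just match centroids between successive frames; and because every blob is independent, the touch count is limited only by processing power, as described above.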
The unit is rugged and designed to take all kinds of abuse. Senior director of marketing Mark Bolger demonstrated this quite dramatically by slamming his hand onto the top of the screen as hard as he could—it made a loud thump, but the unit itself didn't move. The screen is also water resistant. At an earlier demonstration, a skeptical reporter tested this by pouring his drink all over the device. Microsoft has designed the unit to put up with this kind of punishment because it envisions Surface being used in environments such as restaurants where hard impacts and spills are always on the menu.
The choice of a 4:3 screen was, according to Nigel Keam, mostly a function of the availability of light engines (projectors) when the project began. Testing and user feedback have shown that the 4:3 ratio works well, and the addition of a slight amount of extra acrylic on each side leaves the table looking like it has normal dimensions.
Built-in wireless and Bluetooth round out the hardware capabilities of Surface. A Bluetooth keyboard with a built-in trackpad is available to diagnose problems with the unit, although for regular use it is not required.
States across the country are having trouble crafting laws to control digital content without running afoul of constitutional protections on speech. And it's not just video games, though these laws have achieved the most notoriety. The state of Ohio this week found itself on the losing end of a judicial decision that struck down a state law banning all Internet transmission of content that is "harmful to minors" if the sender should know that someone on the receiving end is a minor.
Federal judge Walter Rice issued a permanent injunction against the law after pointing out the many ways that it could restrict legitimate speech between adults. The Supreme Court has already found that Internet users have reason to believe that there are minors present in every chat room, for instance; under the Ohio law, anything deemed obscene for minors that was uttered in a chat room could therefore possibly lead to prosecution.
The law was first passed in 2002 and immediately challenged by Media Coalition, a group of booksellers, newspapers, and providers of sexual health information. The judge soon granted a preliminary injunction, and the state amended the law in order to avoid several of the original problems. Despite the changes, Judge Rice still found the bill too broad when applied to the Internet, where it can be difficult or impossible to know the ages of every person that one interacts with.
Media Coalition's executive director, David Horowitz, praised the decision. "While we should have adequate legal safeguards to shield children from objectionable content," he said, "those safeguards cannot unreasonably interfere with the rights of adults to have access to materials that are legal for them."
As in several of the video game lawsuits, the state may be on the hook for attorneys' fees in this case. What makes the entire episode so galling is the fact that Ohio legislators were warned about precisely these problems when they drafted the bill. Back in 2002, when Media Coalition filed its lawsuit, co-counsel H. Louis Sirkin pointed out that "the legislature and Governor were repeatedly informed of the unconstitutionality" of these measures, yet they pressed ahead anyway.
Declan McCullagh of CNet suggests that politicians be forced to pay out of their own pockets when these sorts of avoidable costs end up being paid by taxpayers. Given that the legislators would be the ones who would need to pass such a bill, this is… unlikely, though it would certainly give legislators a good reason to proceed carefully when drafting new legislation.
According to the Dayton Daily News, the state is still deciding whether it will attempt an appeal.
- CNet has a more detailed description of the judge's reasoning
- Media Coalition's original 2002 press release (PDF)
- The judge's decision (PDF)