Longhorn, the new version of the Windows OS, is expected to be out by mid-2006. We had previewed it in December 2003 (Longhorn: Windows to come, page 122). One of the key elements of Longhorn was to be WinFS, a new file system that was to have made file search, retrieval, association and depiction easier. Briefly, WinFS is supposed to store files along with their properties, such as name, author, type and much more. Users were to be able to retrieve or relate files based on these properties, and to visually see and navigate these relationships. It was also supposed to provide better security.
At the end of August, Microsoft announced that WinFS would not be part of the Longhorn release. The official reason for the change in plans is to ensure that the shipping dates do not slip beyond the planned 2006.
Longhorn was to be the first major release of Windows since XP, which was released in late 2001. As software cycles go, six years is a fairly long period between releases. But then, these have been no normal years, what with the economic downturn and the effort Microsoft has had to put into creating the just-released XP Service Pack 2. Service Pack 2 is yet to be extensively deployed.
How would this affect you?
For hardware vendors who have been struggling through quarter after quarter of sluggish desktop sales, it was essential that Longhorn make its debut as early as possible: a new OS is one of those things that can trigger fresh hardware purchases. So for them, WinFS is not as important as a 2006 launch; hence this move.
For rival OS vendors, the delay will give more breathing space; one less feature to compete against, if you will!
What about Joe User or Joe System Administrator? Would the absence of WinFS make for a less compelling upgrade? Microsoft says that the reasons to upgrade, even sans WinFS, would remain compelling given the host of new features that would still make it in, including better graphics; for enterprises, better roaming and device control would provide an equally compelling case. But in truth, WinFS or not, Joe User would in all probability not have the luxury of choice, given that Longhorn could, soon after launch, become the only available version of Windows. The million-dollar question is whether you will actually upgrade or hold on.
Search, the CNet way
I was doing a routine Google search that took me to a page on CNet's news.com site. A box in the middle of the page caught my attention. "Welcome Google user," it said (see visual). Now what was that? CNet had run a search on its own site using the same keywords I had used in Google, and was presenting the results to me. Copying and pasting the URL into a new browser window brought up the same page, but without the search box.
I have not analyzed how this is done, but at first glance it looks like Web services at work. Whatever the case, here is a good example of technology providing better customer service and a better experience.
Disagreement on spam
SPF (Sender Policy Framework) made news due to disagreements between Microsoft and the Apache Software Foundation over its adoption.
Current approaches attempt to identify and segregate spam at the receiver's end.
Of late, there has been a school of thought that spam is better contained at the sender's end, or at least by identifying (actually, by negatively identifying) the origin of spam. SPF (http://spf.pobox.com) belongs to this school.
SPF is championed by Meng Weng Wong, the 28-year-old CTO of pobox.com. Meng admits that the idea is not originally his, but was forked from earlier proposals, specifically the RMX record proposal of Hadmut Danisch and the DMP (Designated Mailers Protocol) idea of Gordon Fecyk.
SPF was designed specifically to address forged e-mail 'from' addresses. Briefly, every domain would publish, as part of its DNS records, a list of machines that are authorized to send mail with that domain's address. When a server gets mail, it checks the sender against the published SPF record of the domain to determine whether the sender is authentic.
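To make the receiving server's check concrete, here is a minimal sketch in Python. The record string and IP addresses are hypothetical, and only the `ip4` mechanism and the `-all`/`~all` terminals are handled; real SPF records also support `a`, `mx`, `include` and other mechanisms, and are fetched from DNS rather than passed in as strings.

```python
import ipaddress

def check_spf(spf_record: str, sender_ip: str) -> str:
    """Check a sender's IP against a (much simplified) SPF record."""
    if not spf_record.startswith("v=spf1"):
        return "none"  # not an SPF record at all
    for term in spf_record.split()[1:]:
        if term.startswith("ip4:"):
            # Does the sender fall inside this authorized network?
            if ipaddress.ip_address(sender_ip) in ipaddress.ip_network(term[4:]):
                return "pass"
        elif term in ("-all", "~all"):
            # Sender matched no mechanism: hard fail or soft fail
            return "fail" if term == "-all" else "softfail"
    return "neutral"

# Hypothetical record: a domain authorizes one /24 to send its mail
record = "v=spf1 ip4:192.0.2.0/24 -all"
print(check_spf(record, "192.0.2.25"))   # an authorized machine
print(check_spf(record, "203.0.113.9"))  # a forger somewhere else
```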
Things got interesting when Microsoft teamed up with SPF. Microsoft had initiated an earlier effort in this direction called Caller ID. Caller ID, SPF and another effort, Submitter Optimization, joined forces to create Sender ID. The key opposition arose from the fact that Microsoft was patenting an algorithm central to the technology. While Microsoft was pushing to make Sender ID a standard and was willing to make the rights available free of cost, opposition built up over the patent angle.
The Apache Foundation and much of the Linux community were very vocal in their rejection of Microsoft's terms for Sender ID. With a majority of Web servers running Apache, their objections do carry a lot of weight. By mid-September, a technical working group of the IETF (Internet Engineering Task Force) had voted against Sender ID being accepted as a standard, because of the patent claim and the as-yet-unresolved implications it held.
Adding insult to injury, that was not the only problem with SPF. Apparently, more spammers than legitimate users had signed up for SPF, resulting in spam getting 'authenticated' by the system.
Clearly, the hunt for a lasting solution to spam is far from over, and till then, one will have to rely on existing technology, such as Bayesian filtering, to cull what it can of the tidal wave of spam that makes its way towards us every day.
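As a rough illustration of the Bayesian filtering mentioned above, the sketch below combines per-word spam probabilities into an overall score, in the style popularized by Paul Graham. The word probabilities here are invented for illustration; a real filter would learn them from a corpus of the user's mail.

```python
import math

# Toy statistics: how often each word appears in spam vs legitimate mail.
# These numbers are assumed purely for illustration.
spam_prob = {"viagra": 0.95, "free": 0.80, "meeting": 0.10, "report": 0.05}

def spam_score(words):
    """Combine per-word spam probabilities with a naive-Bayes formula:

        P(spam) = prod(p) / (prod(p) + prod(1 - p))

    computed in log space for numerical stability. Words the filter
    has never seen are treated as roughly neutral (0.4).
    """
    log_p, log_q = 0.0, 0.0
    for w in words:
        p = spam_prob.get(w, 0.4)
        log_p += math.log(p)
        log_q += math.log(1.0 - p)
    return 1.0 / (1.0 + math.exp(log_q - log_p))

print(spam_score(["free", "viagra"]))     # close to 1: likely spam
print(spam_score(["meeting", "report"]))  # close to 0: likely ham
```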
Tomorrow's CPUs are dual core
Moving from OSs to CPUs: dual core has been the flavor of the month, with first AMD and then Intel demonstrating machines equipped with dual-core CPUs. Meanwhile, there is news that IBM will ship dual-core G5s for Macs. Volume commercial availability of the AMD and Intel systems is scheduled for the middle of 2005.
What are dual-core CPUs and how are they likely to affect you? Before we get into that, a bit of history is in order. Dual-core CPUs are not a new concept; they have been around for some time in higher-end systems from IBM, HP and Sun. The current buzz is about them becoming a feature of x86 servers, desktops and even notebooks.
So, what are dual-core CPUs? A dual-core CPU, as the name suggests, packs two CPUs into the same die, a bit like a dual-processor system, except that the two processors are built as one. Dual-core processors further decouple processor performance from clock speed, and are expected to dramatically improve the ability of computers to handle multiple processor-intensive jobs simultaneously. The processors could also be designed so that the second core switches in only when needed.
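The benefit to multiple processor-intensive jobs can be seen with any OS-level parallelism. Purely as an illustration, the Python sketch below runs the same CPU-bound task twice, first serially and then on a pool of two worker processes; on a dual-core (or dual-processor) machine the second run should finish in roughly half the time. The task and its size are arbitrary choices, not anything from the chip vendors.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    # A deliberately CPU-bound task: sum of squares up to n
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    N = 2_000_000

    start = time.perf_counter()
    for _ in range(2):
        busy_work(N)
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=2) as pool:
        # Two processes let two cores crunch simultaneously
        list(pool.map(busy_work, [N, N]))
    parallel = time.perf_counter() - start

    print(f"serial: {serial:.2f}s, parallel: {parallel:.2f}s")
```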
While dual-core processors could significantly improve your computing experience, they could bring up their own set of problems, particularly for software vendors and enterprises.
Enterprise software is traditionally sold on per-seat or per-processor pricing. In the per-processor model, would a dual-core processor be counted as one processor or two? Compounding the issue, AMD plans to bring out quad-core processors by 2007. Very soon, IT managers would need to keep track of not only the number of processors inside their servers, but also the number of cores inside each of those processors. Welcome to the future of technology!
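The licensing question reduces to simple arithmetic; the sketch below, with hypothetical prices, shows just how much hangs on whether a vendor counts a 'processor' as a physical socket or as an individual core.

```python
def license_cost(sockets: int, cores_per_cpu: int, price_per_unit: float,
                 count_cores: bool) -> float:
    """Cost under per-processor licensing. Whether a 'processor' means a
    socket or a core is exactly the open question; prices are hypothetical."""
    units = sockets * (cores_per_cpu if count_cores else 1)
    return units * price_per_unit

# A four-socket server with dual-core CPUs at a hypothetical $10,000 per unit:
print(license_cost(4, 2, 10_000.0, count_cores=False))  # per-socket: 40000.0
print(license_cost(4, 2, 10_000.0, count_cores=True))   # per-core:   80000.0
```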