Back in 2000, Google had data centers only on the US west coast and was planning an expansion to the east coast to reduce latency to end users. At the time, Google was not hugely profitable like today, and was very conscious of costs. One of the biggest costs of the move was duplicating the data contained in their search indexes onto the east coast. Google had just passed indexing 1 billion web pages, and had around 9 terabytes of data in their indexes. They calculated that even at the highest speed of 1 gigabit per second, it would take 20 hours to transfer all the data, at a total cost of $250,000.
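As a back-of-envelope check on those numbers, here's a minimal sketch, assuming a dedicated 1 Gbit/s link and ignoring protocol overhead:

```python
# Rough transfer-time estimate for moving ~9 TB of index data over a 1 Gbit/s link.
index_bytes = 9 * 10**12            # 9 terabytes
link_bits_per_second = 1 * 10**9    # 1 gigabit per second

transfer_seconds = index_bytes * 8 / link_bits_per_second
print(transfer_seconds / 3600)      # -> 20.0 hours
```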
Larry and Sergey had a plan, however, and it centered on exploiting a loophole in the common billing practice known as burstable billing, which is employed by most large bandwidth suppliers. The common practice is to take a bandwidth usage reading every 5 minutes for the whole month. At the end of the month, the top 5% of readings are discarded, to eliminate spikes (bursts), and the bill is based on what remains. They reasoned that if they transferred data for less than 5% of the entire month (e.g. for 30 hours), and didn't use the connection at all outside that time, they should be able to get some free bandwidth.
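To make the loophole concrete, here is a minimal sketch of how 95th-percentile (burstable) billing is typically calculated; the traffic figures below are illustrative assumptions, not numbers from the book:

```python
# Sketch of 95th-percentile (burstable) billing.
# The supplier records a usage sample every 5 minutes; at month's end the top 5%
# of samples are thrown away and the bill is based on the highest remaining sample.
def billable_rate(samples_mbps):
    ordered = sorted(samples_mbps)
    keep = int(len(ordered) * 0.95)   # index just past the 95th-percentile sample
    return ordered[keep - 1]

# A 30-day month has 30 * 24 * 12 = 8640 five-minute samples, so the top 5%
# is 432 samples, i.e. 36 hours. Bursting for only ~30 hours a month therefore
# falls entirely within the discarded spikes, and the billable rate is zero.
quiet_hours, burst_hours = 720 - 30, 30
samples = [0] * (quiet_hours * 12) + [1000] * (burst_hours * 12)
print(billable_rate(samples))         # -> 0
```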
So for 2 nights a month, between 6pm and 6am Pacific Time, Google pumped the data from their west coast data center to their new east coast location. Outside of these 2 nights, the router was unplugged. At the end of the month, the bill came to nothing.
They continued like this every month until the contract with their bandwidth supplier ended, and they were forced to negotiate a new one, which meant actually paying for their bandwidth. By this time, Google had started buying up strategically located stretches of fiber, paving the way for its own fiber network to support its increasing bandwidth needs.
In The Plex: How Google Thinks, Works, and Shapes Our Lives [Amazon]
By Steven Levy
Published: April 12, 2011
See pages 187-188, Steven Levy's interview with Urs Hölzle and Jim Reese.
Tuesday, February 19, 2013
Friday, February 1, 2013
Here are some tips on learning a new programming language. They aren't listed in any specific order. Hopefully you'll gain at least one new tip that will help you become proficient in the next language you learn.
- Build something you actually need right now. This could be either a tool you can use in your day job, or something useful for home.
Consider these example projects ...
- Write a simple unit testing framework. Many new languages don't have any unit testing frameworks available when they are first introduced. This project will force you to use areas of the language like reflection and meta-programming. Once completed, it becomes useful straight away, for unit testing your future work in the language.
- Implement a disk usage tool that summarizes the disk usage of all directories on a disk and outputs the results to the console. It doesn't require overly complex algorithms but touches a lot of the basics: recursion, filesystems, command line parsing and output formatting (see the sketch after this list).
- Implement a backup/archive script which has command line switches to exclude certain file extensions. It should place the backup into a single .zip or .tar.gz file. The project will touch on the following: recursion, filesystems, command line parsing, compression libraries and regular expressions.
- Port an existing, well known program to the new language. Since you are porting it, the application design work is already done. This frees up your mind to focus on the specifics of the new language. After you've finished, you'll have a good reference to refer back to when comparing the old language with the new.
- Find a decent book on the language and read through it all as fast as you can. The goal is not to memorize everything, but to soak up the ideas and idioms of the language. Then write a small but non-trivial project. When you get stuck, hopefully it'll trigger a memory of something from the book, and you can go back and refer to it.
- Mix action with equal parts learning (reading books/tutorials). Too much action without learning, and you get a lot of poor quality code. Too much learning without enough action, and you fail to absorb the material to a deep enough level.
- Study reference works on public repositories. Find a medium sized project on GitHub which is written 100% in the language. Read through the code and try to understand it. Look for projects written by the language designers or an acknowledged expert.
As an example, with Go, the standard libraries for the language are written in Go and are open source, e.g. here is part of the strings package. In addition, Brad Fitzpatrick and other members of the Go team have several projects on GitHub that you can read and learn from, e.g. here is a Go client for Memcache.
- Devote large, uninterrupted chunks of time, at least half a day, to learning the new language. Brief, half hour sessions over the course of the week aren’t really useful, because most of the time would be spent just getting back up to speed on what you previously studied.
- Learning a language shouldn’t just be a solitary endeavor. There are plenty of people who have made the same mistakes that you have, so asking for help is a great way to overcome problems when you get stuck. Some possible sources of help online: Language-specific IRC channels, StackOverflow, Twitter, Facebook groups, Quora, Google+, Google groups. You can also submit your finished code to these forums after you've completed a project; people more experienced with the language than you will often be able to identify areas which can be improved or simplified.
- Use an editor with syntax highlighting. Perennial favorites such as Vim and Emacs, plus newer editors such as Sublime Text, support most if not all programming languages and are available for all major operating systems. Some languages are closely associated with a specific IDE, which is a good choice when learning that language. These are generally ...
- Eclipse IDE for Java and Android development.
- Xcode for Objective-C and iOS development (on Mac OS X only).
- Visual Studio IDE for C#, C++, VB.NET, F# (on Windows only).
- Working on a real project with real customers and deadlines is a white hot crucible for learning a new programming language. If you really need to learn a language quickly, then consider taking on a new job which requires it. Once you've got the job, you'll have no other choice but to learn it quickly.
- Finally, a tip from The Pragmatic Programmer, Tip #8 "Invest Regularly in Your Knowledge Portfolio":
"Learn at least one new language every year. Different languages solve the same problems in different ways. By learning several different approaches, you can help broaden your thinking and avoid getting stuck in a rut. Additionally, learning many languages is far easier now, thanks to the wealth of freely available software on the Internet."
Tuesday, January 15, 2013
In "Big Ball of Mud", Brian Foote and Joseph Yoder propose that the default (and most common) software architecture in use is the "Big Ball of Mud" pattern and go on to discuss six additional patterns and activities that it gives rise to: "Throwaway Code", "Piecemeal Growth", "Keep it Working", "Shearing Layers", "Sweep it Under the Rug" and "Reconstruction".
Their original article is located here and can be downloaded in PDF form here: Big Ball of Mud by Brian Foote and Joseph Yoder [PDF download]
I have picked out what I think are the highlights of their article ...
Big Ball of Mud ... alias Shantytown or Spaghetti Code
Shantytowns are usually built from common, cheap materials with simple tools and using unskilled labor. The construction and maintenance of the shantytown is labor intensive, and there is little or no labor specialization - each builder must be a jack of all trades. There's no overall planning, or regulation of future growth.
Too many of our software systems are, architecturally, little more than shantytowns. Investment in tools and infrastructure is often inadequate and the tools that are used are primitive. Parts of the system grow unchecked, and the lack of architecture and planning allows problems in one part of the system to erode and pollute adjacent portions. Deadlines loom like monsoons, and architectural elegance seems unattainable.
The time and money to chase perfection are seldom available, and there is a survival-at-all-costs attitude: do whatever it takes to get the software working and out the door on time. The biggest cost of Big Ball of Mud development is the lack of a decent architecture.
Common features of Big Ball of Mud Code
- Data structures are haphazardly constructed, or non-existent.
- Everything talks to everything else.
- Important state data is global.
- State data is passed around through Byzantine back channels that circumvent the system's original structure.
- Variable and function names are uninformative and misleading.
- Functions use global variables extensively, as well as long lists of poorly defined parameters.
- Functions themselves are lengthy and convoluted, and perform several unrelated tasks.
- Code duplication.
- The flow of control is hard to understand, and difficult to follow.
- The programmer’s intent is next to impossible to discern.
- The code is simply unreadable, and borders on indecipherable.
- The code exhibits the unmistakable signs of patch after patch at the hands of multiple maintainers, each of whom barely understood the consequences of what he or she was doing.
- Did we mention documentation? What documentation?
Some software engineers come to regard life with the Big Ball of Mud as normal and become skilled at learning to navigate these quagmires, and guiding others through them. Over time, this symbiosis between architecture and skills can change the character of the organization itself, as swamp guides become more valuable than architects.
As per Conway's Law, architects depart in futility, while engineers who have mastered the muddy details of the system they have built prevail. The code becomes a personal fiefdom, since the author can barely understand it anymore, and no one else can come close. Once-simple repairs become all-day affairs, as the code turns to mud.
Throwaway Code ... alias Quick Hack or Protoduction
While prototyping a system, you're normally unconcerned with how elegant or efficient your code is. You plan that you will only use it to prove a concept and once the prototype is done, the code will be thrown away and written properly. As the time nears to demonstrate the prototype, the temptation to load it with impressive but utterly inefficient realizations of the system’s expected eventual functionality can be hard to resist. Sometimes, this strategy can be a bit too successful. The client, rather than funding the next phase of the project, may slate the prototype itself for release.
This quick-and-dirty coding is often rationalized as being a stopgap measure, with the intention of rewriting it properly later. More often than not, the time is never found for this follow-up work. The code languishes, while the product flourishes. It becomes a protoduction - a prototype that gets used in production.
Once it becomes evident that the throwaway code is going to be around for a while, you can turn your attention to improving its structure, either through an iterative process of Piecemeal Growth, or via a fresh draft, as discussed in the Reconstruction pattern below.
Piecemeal Growth ... alias Refactoring
Successful software attracts a wider audience, which can, in turn, place a broader range of requirements on it.
When designers are faced with a choice between building something elegant from the ground up, or undermining the architecture of the existing system to quickly address a problem, architecture usually loses.
In the software world, we deploy our most skilled, experienced people early in the lifecycle. Later on, maintenance is often relegated to junior staff, and resources can be scarce. The so-called maintenance phase is the part of the lifecycle in which the price of the fiction of master planning is really paid. It is maintenance programmers who are called upon to bear the burden of coping with the ever widening divergence between fixed designs and a continuously changing world.
Piecemeal growth can be undertaken in an opportunistic fashion, starting with the existing, living, breathing system, and working outward, a step at a time, in such a way as to not undermine the system’s viability. You enhance the program as you use it. Massive system-wide changes are avoided - instead, change is broken down into small, manageable chunks.
Keep It Working ... alias Continuous Integration
Businesses become critically dependent on their software and computing infrastructures. There may be times where taking a system down for a major overhaul can be justified, but usually, doing so is fraught with peril. Therefore, do what it takes to maintain the software and keep it going. Keep it working.
This approach can be used for both minor and major modifications. Large new subsystems might be constructed off to the side, perhaps by separate teams, and integrated with the running system in such a way as to minimize disruption.
A development build of each product can be performed at regular intervals, such as daily or even more often via an automated build tool. Another vital factor in ensuring a system's continued vitality is a commitment to continuous testing, which can be integrated into the automated build process.
Software never stands still. It is often called upon to bear the brunt of changing requirements because, being made of bits, it can change.
Over time, the software's frameworks, abstract classes, and components come to embody what we've learned about the structure of the domains for which they are built. More enduring insights gravitate towards the primary structural elements of these systems and change rarely. Parts which find themselves in flux are spun out into the data, where users can interact with them. Software evolution becomes like a centrifuge stirred by change. The layers that result, over time, can come to a much truer accommodation with the forces that shaped them than any top-down planning could have devised.
Sweeping It Under The Rug
At first glance, a Big Ball of Mud can inspire terror and despair in the hearts of those who would try to tame it. The first step on the road to architectural integrity can be to identify the disordered parts of the system, and isolate them from the rest of it.
Overgrown, tangled, haphazard spaghetti code is hard to comprehend, repair, or extend, and tends to grow even worse if it is not somehow brought under control. If you can’t easily make a mess go away, at least cordon it off. This restricts the disorder to a fixed area, keeps it out of sight, and can set the stage for additional refactoring.
Reconstruction ... alias Total Rewrite
One reason to start again might be that the previous system was written by people who are long gone. Doing a rewrite provides new personnel with a way to reestablish contact between the architecture and the implementation. Sometimes the only way to understand a system is to write it yourself. Doing a fresh draft is a way to overcome neglect. Issues are revisited. A fresh draft adds vigor. You draw back to leap. The quagmire vanishes. The swamp is drained.
When a system becomes a Big Ball of Mud, its relative incomprehensibility may hasten its demise, by making it difficult for it to adapt. It can persist, since it resists change, but cannot evolve, for the same reason. Instead, its inscrutability, even when it is to its short-term benefit, sows the seeds of its ultimate demise.
The above are highlights from the original article, which is located here ... Big Ball of Mud by Brian Foote and Joseph Yoder [PDF download]
Some further reading on Programmers.StackExchange ...
I've inherited 200K lines of spaghetti code — what now?
How to convince my boss that quality is a good thing to have in code?
How to keep a big and complex software product maintainable over the years?
Techniques to re-factor garbage and maintain sanity?
When is code “legacy”?
What is negative code?
Some related books ...
Working Effectively With Legacy Code ... by Michael Feathers [Amazon]
Refactoring: Improving the Design of Existing Code ... by M. Fowler, K. Beck, et al. [Amazon]
Design Patterns: Elements of Reusable Object-Oriented Software ... by the Gang of Four [Amazon]
Patterns of Enterprise Application Architecture ... by Martin Fowler [Amazon]
Domain Driven Design: Tackling Complexity in the Heart of Software ... by Eric Evans [Amazon]
Head First Design Patterns by E. Freeman, E. Freeman, et al. [Amazon]
Thursday, January 3, 2013
Reading through David B. Stewart's paper entitled "Twenty-Five Most Common Mistakes with Real-Time Software Development" (PDF, 131 KB).
There's an interesting nugget of advice at number 8, "The first right answer is the only answer":

"Inexperienced programmers are especially susceptible to assuming that the first right answer they obtain is the only answer. Developing software for embedded systems is often frustrating. It could take days to figure out how to set those registers to get the hardware to do what is wanted. At some point, Eureka! It works. Once it works the programmer removes all the debug code, and puts that code into the module for good. Never shall that code ever change again, because it took so long to debug, nobody wants to break it.

Unfortunately, that first success is often not the best answer for the task at hand. It is definitely an important step, because it is much easier to improve a working system than to get the system to work in the first place. However, improving the answer once the first answer has been achieved seems to rarely be done, especially for parts of the code that seem to work fine. Indirectly, however, a poor design that stays might have a tremendous effect, like using up too much processor time or memory, or creating an anomaly in the timing of the system if it executes at a high priority."

As David suggests, when dealing with complex, mission critical or concurrent sections of code, that first success is often not the best solution. Weeks later, you might find that it's not performing as well as it should in production, and you have to revisit that section of code looking for a better solution. But the best time to develop a better solution was back when the job was fresh in your mind, when you developed the original solution.

So he suggests adapting this into a new practice:

"As a general rule of thumb, always come up with at least two designs for anything. Quite often, the best design is in fact a compromise of other designs. If a developer can only come up with a single good design, then other experts should be consulted with to obtain alternate designs."

There are multiple problems with this advice:
- YAGNI, you ain't gonna need it. Develop an extra solution only if and when you need it; don't create extra work where it might not be needed.
- Premature optimization is the root of all evil. Knowing when you need to optimize a solution further is almost impossible to do at design time, before any benchmarking or code profiling have been done.
- Lastly, if your solution passed the unit test, yet fails further down the line (in production for example), then that suggests there was a problem with your unit test, and not with your solution. If required, you should add some specific concurrency and/or performance testing to your unit tests (see the sketch after this list). This will mean you can then optimize your code while maintaining a TDD approach.
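For example, a performance budget can be expressed as an ordinary unit test, so a solution that later turns out to be too slow fails during testing rather than in production. A minimal sketch, where the module, the function under test and the 50 ms budget are all made-up assumptions:

```python
import time
import unittest

from mymodule import process_batch   # hypothetical module and function under test

class PerformanceTest(unittest.TestCase):
    def test_process_batch_within_budget(self):
        payload = list(range(10000))
        start = time.time()
        process_batch(payload)
        elapsed = time.time() - start
        # The 50 ms budget is illustrative; pick one that reflects the real requirement.
        self.assertLess(elapsed, 0.05, "process_batch exceeded its time budget")

if __name__ == "__main__":
    unittest.main()
```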
I notice the original article is actually from 1999, which I believe is before test-driven development came into prominence. I think this particular piece of advice ("come up with two designs for anything") might have been OK for some projects back then, but would now be considered flawed and certainly not advisable.
Friday, December 28, 2012
1. Free stuff
When retailers offer products and services for "free", there's usually an ulterior motive. For online shops, "free" provides the initial incentive for buyers to use their web site, and may be effective in retaining customer loyalty. One of the most common uses of "free" is free delivery.
UK-based The Book Depository is without doubt using the power of free. The title of their home page is "Free delivery worldwide on all books from The Book Depository". This title shows up in Google when searching for "book depository", along with the site's description, which is "The Book Depository offers over 8 million books with free delivery worldwide". The site's byline is "The Book Depository. Free delivery worldwide on all our books". Also in a prominent position on the home page is "Free worldwide delivery", which links to a page listing all of the countries where they offer free delivery.
2. Per customer limits and the perception of product scarcity
Per customer limits lead buyers to think that a product is scarce, and increase their incentive to buy now. They also entice people to buy more than they originally intended. The legitimate reason for doing this - to avoid products getting sold on the gray market - is hardly ever a real concern. Retailers will also impose a time limit during which the product is available, along with a ceiling on the number of products that can be sold, after which the deal will end.
Living Social Deal - Laser Nail Therapy Clinic - 70% Off Laser Nail Fungus Removal for Feet ($450)
This group buying deal shows a "Limit 4 per customer" in the fine print. Most group buying sites have per customer limits for all their deals. In addition to the per customer limit, there is an absolute limit on how many deals can be sold, as well as the deal only being available for a limited amount of time, in this case, 7 days.
3. The 9 factor - prices ending in 9, 99 or 95
Using prices that end in 9, 99, or 95 is called 'Charm Pricing' or 'Psychological Pricing'. We've been culturally conditioned to associate these prices with discounts. And because we read numbers from left to right, we mentally encode a price like $7.99 as $7, especially when we quickly glance at the price. That's called the "left-digit effect" - it's encoded in our minds before we have finished reading all of the digits.
Pricing of the Amazon Kindle family of products - $499, $199, $119.
As opposed to $500, $200, $120.
4. Easy math
Most sites, when putting a product on sale, will show you what price it was marked down from. It might be "was $20, now $15". You will rarely see something like "was $20, now $14.22". The reason is that if the difference is easy to calculate, we tend to think it's a better deal. It's called "computation fluency". Another method employed by many sites is to display the amount of the total saving along with the percentage saved.
The Body Shop USA - 50% off Sitewide - After Christmas Sale. Everything is 50% off, making the amount you are saving obvious; you don't need to check any prices or fine print. Anything you buy on their site will include a significant saving.
5. Sale price font size and color
A common practice in online sales is to display the sale price in a different font size and/or color from the original list price. Typically, the original list price will be shown in strikethrough.
Amazon uses red text to denote a sale price, shown below the original list price, which is struck out and shown in light gray. However, products which are not on sale and are being sold at the original list price continue to use this red (sale price) font. One theory says that customers will become accustomed to seeing red text as a saving, and will be more inclined to buy, even when the product is actually not on sale.
6. Dynamic Pricing (also known as Time-Based Pricing)
The airline industry is often cited as a dynamic pricing success story. It employs the technique so well that most of the passengers on any given airplane have paid different ticket prices for the same flight. By responding to market fluctuations or large amounts of data gathered from customers - ranging from where they live to what they buy to how much they have spent on past purchases - dynamic pricing allows companies to adjust the prices of identical goods to correspond to a customer’s willingness to pay.
During Thanksgiving week, The New York Times tracked the price of Dance Central 3, a popular Xbox game, as it dropped on Amazon from $49.96 to $24.99 to $15.
Example 2
Coca-Cola tested dynamic pricing in automated vending machines where prices would fluctuate based on the surrounding temperature. Their theory was that a soft drink would be worth more when it is hotter outside and, correspondingly, that demand for soft drinks would decrease if it were cold outside. It was an unpopular idea and, luckily, Coca-Cola abandoned it.
Sidenote
Oren Etzioni, a computer science professor at the University of Washington, became incensed when he found out that the traveller sitting next to him on a flight got a much better price for his ticket than he did. Etzioni started collecting online price data from all US airlines, then he created his own formula that could predict when the airlines would raise or lower their prices. His company was bought by Microsoft. And today, when you search for flights on Microsoft's Bing Travel site, you'll see colour coded arrows (called the "price predictor") letting you know if the price of that ticket is likely to head up or down.
7. Pay what you want
Pay what you want is a pricing system where buyers pay any desired amount for a given product or service, sometimes including paying nothing (i.e. free). Sometimes a minimum (floor) price may be set, or a suggested price may be indicated to the buyer. The buyer can also pay an amount higher than the standard price.
It has the benefit of reducing buyer's remorse, which is what happens when you decide afterwards that you've paid too much for a product. It also can result in a viral increase in popularity or visibility for a product when used in highly competitive markets.
It is often used for products which use digital delivery, such as software and music. There is usually no additional cost per download to the seller anyway, since the site's bandwidth would usually be paid for on a capped, monthly basis.
Freeware software is often distributed under this model. For example, on the home page of Paint.net is the message "Show your appreciation for Paint.NET and support future development by donating!". Typically, a PayPal donate button is used.
In October 2007, Radiohead released their seventh album, In Rainbows, through the band's website as a digital download and requested fans just pay whatever amount they thought it was worth.
Example 3
Introduced during May 2010, the Humble Bundle was a set of six downloadable indie games which were distributed using a pay what you want system (with inclusion of a buyer-controllable charitable contribution).
8. Freemium
Freemium is a business model that works by offering a base product or service free of charge (typically digital offerings such as software, games or online software - software as a service). A premium is then charged for advanced features, functionality, or for related products and services.
LinkedIn - Basic features of the social network are available for free. For the ability to see the profile of members outside your network, and to access advanced features, you'll need to pay.
The New York Times paywall. A limited number of news articles can be read for free per month. To get full access to the site, you'll need to pay.
Example 3
Zynga publishes games for Facebook and the major smartphone/tablet app stores. Typically there are in-game actions (e.g. farming) which are rewarded with an in-game currency (e.g. loot/coins/points). The in-game currency can then be used to buy additional game features, or to progress further in the game. The in-game currency can be topped up by paying Zynga real dollars via an in-app purchase.
9. No dollar signs
A 2009 Cornell University study found that diners in upscale restaurants spent significantly less when menus contained the word "dollars" or the symbol "$". Restaurants use the technique of omitting dollar signs to get you to focus on the product being sold (the food) rather than the price. They may also mention or profile the chef who is cooking the food. Apart from the websites of restaurants, it's difficult to see this being used much in general online retailing.
Gramercy Tavern, New York City
According to Urbanspoon, this is the most popular fine dining restaurant in New York City. There are no dollar signs at all on their website, and their a la carte lunch menu (PDF) doesn't feature any dollar signs either, only numbers.
10. "X for $X"Buyers will often buy more of a product than they originally intended, if it means they will secure a bargain.
Bath and Body Works - 5 for $5 sale
In this case you need to buy 5 products to realise a saving of 33% off the original list price.
Sunday, December 16, 2012
I've got into the habit of doing this check at the end of any development which has involved some copy & pasting of existing code. It's been useful for me in picking up issues before build/test/commit.
It just involves getting a reference count of all occurrences of the newly added variable (or function) and comparing it with the count for the old variable (or function) ...
- Select the newly added variable (or function) name
- Right-click to bring up the context menu
- Select the "Find all references" option
The screenshot shown is from Visual Studio, but this feature is also available in other IDEs and editors, such as Eclipse (via the "References" context menu). Alternatively, if you're not using an IDE, a simple text search across all files in your project will be just as useful, assuming that the search string is unique within your project.
Often, you'd expect the reference count to match the count for the code you've copied & pasted from (as shown above). If it doesn't match, then by looking at the references which differ, you should be able to easily reason about why, and whether that's expected or not. If it's unexpected, that could mean there's an error in your code.
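If you're not using an IDE, the same check can be scripted. Here's a minimal sketch that counts occurrences of two identifiers across a source tree and compares the totals; the file extension and identifier names are placeholders, not anything from the example above:

```python
# Count occurrences of an old and a new identifier across a source tree.
# A mismatch in the totals is worth a closer look after copy & paste edits.
import os
import re

def count_references(root, identifier, extension=".cs"):
    pattern = re.compile(r"\b%s\b" % re.escape(identifier))
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for filename in filenames:
            if filename.endswith(extension):
                with open(os.path.join(dirpath, filename)) as f:
                    total += len(pattern.findall(f.read()))
    return total

old_count = count_references(".", "oldCollection")   # placeholder identifier
new_count = count_references(".", "newCollection")   # placeholder identifier
print(old_count, new_count, "match" if old_count == new_count else "MISMATCH")
```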
So who admits to copy & paste coding - isn't it a sign of bad programming? If you can copy & paste big blocks of code and then make only a few small changes, that's a code smell - you should probably extract the common code into a new function and call it twice. Agreed, but there are still many cases where you have a valid reason to copy existing code. One example is making fine grained changes to some logic, when it might not be worth the additional complexity of creating a generic function to handle both cases. Another is the case shown above, copying the declaration of a collection member.
This is a pretty simple and obvious technique, and it might be well known to many already, but I thought I'd mention it anyway. It can also be used outside of the copy & paste case - I've just described that case here because it's where I've used the technique most often.
Sunday, November 4, 2012
I've just read some predictions for the future of the PC, written in 1993, by Nathan P. Myhrvold, the former Chief Technology Officer at Microsoft.
His memo is amazingly accurate. Note that his term "IHC" (Information Highway Computer) could be roughly equated with today's smartphone or tablet device, connecting to the Internet via WiFi or a cellular network. In his second last paragraph, Myhrvold predicts the winners will be those who "own the software standards on IHCs" which could be roughly equated with today's app stores, such as those on iOS (Apple), Android (Google, Amazon) and Windows 8 (Microsoft).
The only thing you could say he possibly didn't foresee would be the importance of hardware design in the new smartphone and tablet industry. I'd suggest that Apple achieved such a head start on their competition through a combination of both cutting edge hardware design along with their curated app store model for distributing software. Interestingly, Microsoft has only last month entered the hardware game with their new Surface brand tablets for Windows 8 and Windows RT, and also announced a shift to focus on becoming a "Devices and Services" company.
Note: the term "Cairo" used below is the code name for a Microsoft Research project which lasted from 1991 to 1996. It resulted in some features that were eventually rolled into Windows 95, IIS and SQL Server.
The below is an extract from a memo written by Nathan P. Myhrvold, titled "Road Kill on the Information Highway". September 8, 1993. Full Source.
I've saved the best for last. Our own industry is also doomed, and will be one of the more significant carcasses by the side of the information highway. The basic tasks that PCs are used for today will continue for as long as it makes sense to predict, so it isn't a question of the category disappearing. The question is one of who will continue to satisfy these needs and how?
As a case in point, consider that the fundamental category needs for mainframes and minicomputers also still exists and will continue to do so for a very long time. Despite this, the companies involved are dying and the entire genre is likely to disappear. The reason is that a new breed of machine - the PC - came along which outflanked them. In the early years PCs were not particularly good at what minis and mainframes did, but they were terrific at a whole new set of problems that the traditional computing infrastructure had basically ignored.
Personal productivity applications drove PCs onto millions of desks and created a very vital industry which grew faster - both in business terms and price/performance - than the mainframe and minicomputer markets. The power conferred by this growth made PCs the tail which wagged the dog; free to ignore the standards which existed for mainframes and minis and move off on their own. Over time the exponential growth in computing has finally (after 17 years) given the PC industry the technical ability to beat minis and mainframes in their own domain. Although the early software platforms for PCs had to be extended to fully realize this potential (DOS to Windows to NT to Cairo), it turned out to be far easier to do this than to make mainframe or minicomputer systems address the new needs and applications. Even within the heart of minicomputer and mainframe's domain - giant transaction processing applications etc., the old standards will not be used.
I believe that the same thing will happen again with PCs playing the role of mainframes and minis, and the computing platforms of the information highway taking over the role of the challenger.
The technical needs of computers on the information highway, or IHCs are quite different than for PCs. The killer applications for IHCs in the early years will include video on demand, games, video telephony and other distributed computing tasks on the highway. It is hard to classify this as either higher tech or lower tech than the software for PCs, because the two are quite different. Most IHCs will certainly need to be cheaper than PCs by an order of magnitude and this will inevitably cause them to be less capable in many ways, but some of their requirements are far more advanced.
Another way to say this is that the rich environment of software for PCs is largely irrelevant for IHCs. Windows, NT, System 7 and Cairo do not solve the really important technical problems required for IHC applications, and it is equally likely that the early generations of IHC software won't be great platforms for PC style apps. This isn't surprising because they are driven by an orthogonal set of requirements.
The IHC world will almost certainly grow faster than PCs, both in business terms and in price/performance. The PC industry is already reaching saturation from a business perspective. Technically speaking, the industry is mired in hardware standards (Intel and Motorola CISC processors) with growth rates that are flattening out relative to the state of the art - just as the 360/3090 and VAX architectures did. The Macintosh and Windows computing environments may be able to survive the painful transition to new RISC architectures, but they will lose time and momentum in doing so.
PCs will remain paramount within their domain for many years (we'll still have a computer on every desk) but IHCs will start to penetrate a larger and larger customer base on the strength of its new and unique applications. The power of having the world's information - and people - on line at any time is too compelling to resist. For a long time people will still have a traditional PC to handle traditional PC tasks - in precisely the same way that they have kept their mainframes and minis for the last 17 years. One day however people will realize that their little IHCs are more powerful and cheaper than PCs - just as we have finally done with mainframes. There will be a challenge for the IHC software folks to write the new systems and applications software necessary to obviate PCs, just as we had to work pretty hard to come up with NT, but this battle will clearly go to the companies who own the software standards on IHCs. The PC world won't have any more say about how this is done than the companies who created MVS or VMS did about our world. Of course, some of the VMS people were involved, but as discussed above it is very hard for organizations to make the transition.
This may sound like a rather dire prediction, but I think that for the most part it is inevitable. The challenge for Microsoft is to be sufficiently involved with the software for the IHC world that we can be a strong player in that market. If we do this then we will be able to exploit a certain degree of synergy between IHCs and PCs - there are some natural areas where there is benefit in having the two in sync. The point made above is that those benefits are not sufficiently strong that they alone will give us a position in the new world. We'll live or die on the strength of the technology and role that we carve out for ourselves in the brave new world of the information highway.
Many thanks to Reddit user erpettie who originally submitted a link to this memo on /r/technology, which is how I came across it.