
Saturday, November 22, 2014

Android development is the new .NET

Back in 2001, Microsoft Windows had 90% of the worldwide PC operating system market, but desktop Windows software development was dire.

Developers were using tools like ...
  • Visual Basic 6 - very productive, but with ugly, primitive syntax and lacking modern object-oriented features such as inheritance.
  • Visual C++/MFC - serious Windows developers were using this, but it was less productive and extremely verbose, with lots of boilerplate.
  • Delphi - the successor to Turbo Pascal and possibly the best of all the Windows frameworks at the time, but it never really took off.
  • Borland C++/OWL - the Object Windows Library from Borland tracked pretty closely with Visual C++/MFC, and was similarly verbose and boilerplate-heavy.
  • Java - this had a reputation on Windows at the time for slow performance, poor visuals and weak tooling.
At this point, Microsoft came out with Visual Studio .NET, the .NET Framework and a new language named C#, designed with the help of Anders Hejlsberg, the former Turbo Pascal and Delphi language designer. Most Windows developers got on board - compared to what came before, it was a breath of fresh air. Since then, .NET has remained a major player, especially in the corporate and business market.

Fast forward to 2014: Android now has an 80% share of a worldwide market of approximately 1.75 billion smartphones.

Google has its own IDE, Android Studio, and the open source Android SDK, which you can program against in Java or other JVM languages. In the same way that .NET lets you use different .NET-compatible languages such as C++/CLI and F#, Android supports other JVM languages like Scala, Clojure and Groovy, all of which let you cut down on the verbosity and boilerplate of Java.
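
As a rough illustration of that boilerplate reduction (a hypothetical sketch of my own, not tied to any real Android project), a Scala case class gives you a constructor, equals, hashCode, toString and copy for free, where the equivalent Java class needs them all written or generated by hand:

    // Hypothetical example: a three-field value type as a Scala case class.
    // The equivalent Java class needs a constructor, getters, equals, hashCode
    // and toString spelled out by hand (or IDE-generated) - roughly 40+ lines.
    case class User(id: Long, name: String, email: String)

    object BoilerplateDemo {
      def main(args: Array[String]): Unit = {
        val u = User(1L, "Ada", "ada@example.com")
        println(u)              // prints User(1,Ada,ada@example.com)
        println(u == u.copy())  // true - structural equality comes for free
      }
    }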

Google is attempting to bring as many developers over to Android as possible. Their Head of Scalable Developer Relations, Reto Meier, aims to bring 50 million developers to Android.

Given its market share and the employment prospects it offers, Android is a decent choice for developers to try out. Compared with iOS, Android isn't as performant or as profitable, but it now has a decent IDE, the weight of numbers, and the ability to use many Java FOSS libraries.


Wednesday, February 20, 2013

Google's fiber leeching caper

Back in 2000, Google only had data centers on the US west coast and was planning an expansion over to the east coast to reduce latency to end users. At the time, Google was not hugely profitable like today, and was very conscious of costs. One of the biggest costs of the move was duplicating the data contained in its search indexes over onto the east coast. Google had just passed indexing 1 billion web pages, and had around 9 terabytes of data in its indexes. They calculated that even at the highest speed of 1 Gigabit per second, it would take 20 hours to transfer all the data, at a total cost of $250,000.
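
That 20-hour figure checks out with some back-of-the-envelope arithmetic (my own sketch, not from the book): 9 terabytes is 72 trillion bits, and pushing that through a 1 Gigabit per second link takes 72,000 seconds, or 20 hours.

    // Back-of-the-envelope check of the transfer time quoted above.
    object TransferTime {
      def main(args: Array[String]): Unit = {
        val indexBytes     = 9e12   // ~9 terabytes of index data
        val linkBitsPerSec = 1e9    // 1 Gigabit per second
        val seconds = (indexBytes * 8) / linkBitsPerSec
        println(f"${seconds / 3600}%.1f hours")   // prints "20.0 hours"
      }
    }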

Larry and Sergey had a plan, however, and it centered on exploiting a loophole in a common billing practice known as burstable billing, which is employed by most large bandwidth suppliers. The supplier takes a bandwidth usage reading every 5 minutes throughout the month; at the end of the month, the top 5% of readings are discarded to eliminate spikes (bursts), and the bill is based on the highest remaining reading. They reasoned that if they transferred data for less than 5% of the entire month (e.g. for around 30 hours), and didn't use the connection at all outside that time, they should be able to get some free bandwidth.
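
Here is a minimal sketch of that billing model as described above (my own illustration; the traffic numbers are hypothetical, not Google's actual figures). A 30-day month has 8,640 five-minute readings; the discarded top 5% covers 432 readings, or 36 hours, so roughly 24-30 hours of flat-out bursting disappears from the bill entirely.

    // Sketch of 95th-percentile ("burstable") billing as described in the post.
    object BurstableBilling {
      // One reading every 5 minutes; drop the top 5%, bill on the highest of the rest.
      def billedMbps(readings: Seq[Double]): Double = {
        val sorted  = readings.sorted
        val dropped = (sorted.length * 0.05).toInt   // number of readings discarded
        sorted(sorted.length - dropped - 1)          // highest reading that still counts
      }

      def main(args: Array[String]): Unit = {
        // 30-day month = 8640 readings; 24 hours of 1 Gbps bursts = 288 readings,
        // fewer than the 432 readings (36 hours) thrown away as the top 5%.
        val month = Seq.fill(8352)(0.0) ++ Seq.fill(288)(1000.0)
        println(billedMbps(month))   // prints 0.0 - the billed usage is zero
      }
    }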

So for 2 nights a month, between 6pm and 6am Pacific time, Google pumped the data from its west coast data center to its new east coast location. Outside of those 2 nights, the router was unplugged. At the end of the month, the bill came to nothing.

They continued like this every month until the contract with their bandwidth supplier ended, and they were forced to negotiate a new one, which meant actually paying for their bandwidth. By this time, Google had started buying up strategically located stretches of fiber, paving the way for its own fiber network to support its increasing bandwidth needs.

Source: In The Plex: How Google Thinks, Works, and Shapes Our Lives [Amazon] by Steven Levy, published April 12, 2011. See pages 187-188, Steven Levy's interview with Urs Hölzle and Jim Reese.


Sunday, November 4, 2012

The slow decline of PCs and the fast rise of Smartphones/Tablets was predicted in 1993


I've just read some predictions for the future of the PC, written in 1993, by Nathan P. Myhrvold, the former Chief Technology Officer at Microsoft.

His memo is amazingly accurate. Note that his term "IHC" (Information Highway Computer) could be roughly equated with today's smartphone or tablet device, connecting to the Internet via WiFi or a cellular network. In his second-to-last paragraph, Myhrvold predicts the winners will be those who "own the software standards on IHCs", which corresponds roughly to today's app stores, such as those on iOS (Apple), Android (Google, Amazon) and Windows 8 (Microsoft).

The only thing you could say he possibly didn't foresee is the importance of hardware design in the new smartphone and tablet industry. I'd suggest that Apple achieved such a head start on its competition through a combination of cutting-edge hardware design and its curated app store model for distributing software. Interestingly, Microsoft only entered the hardware game last month with its new Surface-brand tablets for Windows 8 and Windows RT, and has also announced a shift in focus towards becoming a "Devices and Services" company.

Note: the term "Cairo" used below is the code name for a Microsoft Research project which lasted from 1991 to 1996. It resulted in some features that were eventually rolled into Windows 95, IIS and SQL Server.


Below is an extract from a memo written by Nathan P. Myhrvold, titled "Road Kill on the Information Highway", dated September 8, 1993. Full Source
Personal Computers
I've saved the best for last.  Our own industry is also doomed, and will be one of the more significant carcasses by the side of the information highway.  The basic tasks that PCs are used for today will continue for a long as it makes sense to predict, so it isn't a question of the category disappearing.  The question is one of who will continue to satisfy these needs and how? 
As a case in point, consider that the fundamental category needs for mainframes and minicomputers also still exists and will continue to do so for a very long time.  Despite this, the companies involved are dying and the entire genre is likely to disappear.  The reason is that a new breed of machine - the PC - came along which out flanked them.  In the early years PCs were not particularly good at what minis and mainframes did, but they were terrific at a whole new set of problems that the traditional computing infrastructure had basically ignored.  
Personal productivity applications drove PCs onto millions of desks and created a very vital industry which grew faster - both in business terms and price/performance - than the mainframe and minicomputer markets.   The power conferred by this growth made PCs the tail which wagged the dog; free to ignore the standards which existed for mainframes and minis and move off on their own.   Over time the exponential growth in computing has finally (after 17 years) given the PC industry the technical ability to beat minis and mainframes in their own domain.   Although the early software platforms for PCs had to be extended to fully realize this potential (DOS to Windows to NT to Cairo), it turned out to be far easier to do this than to make mainframe or minicomputer systems address the new needs and applications.   Even within the heart of minicomputer and mainframe's domain - giant transaction processing applications etc., the old standards will not be used.  
I believe that the same thing will happen again with PCs playing the role of mainframes and minis, and the computing platforms of the information highway taking over the role of the challenger.   
The technical needs of computers on the information highway, or IHCs are quite different than for PCs.  The killer applications for IHCs in the early years will include video on demand, games, video telephony and other distributed computing tasks on the highway.  It is hard to classify this as either higher tech or lower tech than the software for PCs, because the two are quite different.   Most IHCs will certainly need to be cheaper than PCs by an order of magnitude and this will inevitably cause them to be less capable in many ways, but some of their requirements are far more advanced. 
Another way to say this is that the rich environment of software for PCs is largely irrelevant for IHCs.   Windows, NT, System 7 and Cairo do not solve the really important technical problems required for IHC applications, and it is equally likely that the early generations of IHC software won't be great platforms for PC style apps.  This isn't surprising because they are driven by an orthogonal set of requirements. 
The IHC world will almost certainly grow faster than PCs, both in business terms and in price/performance.   The PC industry is already reaching saturation from a business perspective.  Technically speaking, the industry is mired in hardware standards (Intel and Motorola CISC processors)  with growth rates that are flattening out relative to the state of the art - just as the 360/3090 and VAX architectures did.   The Macintosh and Windows computing environments may be able to survive the painful transition to new RISC architectures, but they will lose time and momentum in doing so.    
PCs will remain paramount within their domain for many years (we'll still have a computer on every desk) but IHCs will start to penetrate a larger and larger customer base on the strength of its new and unique applications.   The power of having the worlds information - and people - on line at any time is too compelling to resist.   For a long time people will still have a traditional PC to handle traditional PC tasks - in precisely the same way that they have kept their mainframes and minis for the last 17 years.   One day however people will realize that their little IHCs are more powerful and cheaper than PCs - just as we have finally done with mainframes.   There will be a challenge for the IHC software folks to write the new systems and applications software necessary to obviate PCs, just as we had to work pretty hard to come up with NT, but this battle will clearly go to the companies who own the software standards on IHCs.  The PC world won't have any more say about how this is done than the companies who created MVS or VMS did about our world.  Of course, some of the VMS people were involved, but as discussed above it is very hard for organizations to make the transition. 
This may sound like a rather dire prediction, but I think that for the most part it is inevitable.  The challenge for Microsoft is to be sufficiently involved with the software for the IHC world that we can be a strong player in that market.  If we do this then we will be able to exploit a certain degree of synergy between IHCs and PCs - there are some natural areas where there is benefit in having the two in sync.  The point made above is that those benefits are not sufficiently strong that they alone will give us a position in the new world.   We'll live or die on the strength of the technology and role that we carve out for ourselves in the brave new world of the information highway. 

Many thanks to Reddit user erpettie, who originally submitted a link to this memo on /r/technology, which is how I came across it.
