Saturday, August 31, 2013

Evil User Interface Design

Thanks to Harry Brignull from Dark Patterns for bringing up the interesting topic of evil user interfaces: UIs designed to trick you into spending more money or buying a product you don't want. Here's the best example from their slide deck - the Ryanair booking form. I've added some explanatory comments below each screenshot.

1. Ryanair home page.
Loads of sale fares for just £5, looks great ... let's buy a ticket ...


2. The booking form.
Looks straightforward so far, no issues yet.


3. Passenger details section.
Notice the default option for the insurance dropdown is "Please select a country of residence".
If you're not paying attention, you'll assume it is just an address question, and so you'll select your country of residence. In doing so, you are actually opting to pay for travel insurance ... it's designed to trick you into selecting it by mistake.


4. Travel insurance dropdown.
If you don't want travel insurance, then you have to select the correct option, "No Travel Insurance", which is listed between Latvia and Lithuania.


5. On error, the dropdown clears your selection.
If you make a mistake anywhere on the form (such as forgetting to select "Yes" or "No" for the priority boarding question, as shown above), then when you try to submit it, the insurance dropdown resets to its original option, "Please select a country of residence". They really want you to pay for travel insurance, and this is their last chance to get an extra few percent of users to buy it.


Follow @dodgy_coder

Subscribe to posts via RSS

Saturday, June 22, 2013

Hyperlink vs Button in Android

After installing the Google Play Music app, the first screen allows you to choose which Google account to associate with it.

Seems straightforward enough; but have a look at the screen - what would you press?



Looks like there's a choice of two commands, either Add account or Not now. But there's another command there ... the email address is a clickable hyperlink, but there's no indication that it is - no underline or radically different font. The Add account button actually prompts you to type in a new email address, i.e. not the one listed. One of the most confusing UI designs I've seen lately.

As discussed in this StackExchange.UX post, there's no strict rule about it, but buttons usually perform a command and hyperlinks usually take you somewhere new.

To improve it, maybe there should be a "Use" button to the right of each email address.


Wednesday, February 20, 2013

Google's fiber leeching caper

Back in 2000, Google only had data centers on the US west coast and was planning an expansion to the east coast, to reduce latency to end users. At the time, Google was not hugely profitable like today, and was very conscious of costs. One of the biggest costs of the move was duplicating the data contained in its search indexes onto the east coast. Google had just passed indexing 1 billion web pages, and had around 9 terabytes of data in its indexes. They calculated that even at the highest speed of 1 Gigabit per second, it would take 20 hours to transfer all the data, at a total cost of $250,000.

Larry and Sergey had a plan however, and it centered on exploiting a loophole in the common billing practice known as burstable billing, which is employed by most large bandwidth suppliers. The common practice is to take a bandwidth usage reading every 5 minutes for the whole month. At the end of the month, the top 5% of usage information is discarded, to eliminate spikes (bursts). They reasoned that if they transferred data for less than 5% of the entire month (e.g. for 30 hours), and didn't use the connection at all outside that time, they should be able to get some free bandwidth.
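The arithmetic can be sketched as follows (a hypothetical illustration in Python; real burstable-billing contracts differ in the details):

```python
# Hypothetical illustration of 95th-percentile ("burstable") billing.
# Sample values are invented for the sake of the example.

def billable_rate_mbps(samples):
    """Given 5-minute bandwidth samples (Mbps) for a month, return the
    95th-percentile rate the customer is billed at: sort the samples,
    discard the top 5%, and bill at the highest remaining value."""
    ordered = sorted(samples)
    cutoff = int(len(ordered) * 0.95) - 1   # index of the 95th-percentile sample
    return ordered[cutoff]

# A 30-day month has 30 * 24 * 12 = 8640 five-minute samples, so the top
# 5% is 432 samples, i.e. 36 hours. Transfer flat out for 30 hours (360
# samples) and stay completely idle otherwise:
samples = [1000.0] * 360 + [0.0] * (8640 - 360)
print(billable_rate_mbps(samples))   # -> 0.0, so the bill comes to nothing
```

Transfer for more than 36 hours, though, and the billed rate jumps to the full line speed, which is why the schedule had to stay under the 5% threshold.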

So for 2 nights a month, between 6pm and 6am pacific time, Google pumped the data from their west coast data center to their new east coast location. Outside of these 2 nights, the router was unplugged. At the end of the month the bill came out to be nothing.

They continued like this every month until the contract with their bandwidth supplier ended, and they were forced to negotiate a new one, which meant actually paying for their bandwidth. By this time, Google had started buying up strategically located stretches of fiber, paving the way for its own fiber network to support its increasing bandwidth needs.

Source:

In The Plex: How Google Thinks, Works, and Shapes Our Lives [Amazon]
By Steven Levy
Published: April 12, 2011
See pages 187-188, Steven Levy's interview with Urs Hölzle and Jim Reese.


Saturday, February 2, 2013

How to learn a new programming language

Here are some tips on learning a new programming language. They aren't listed in any specific order. Hopefully you'll gain at least one new tip that will help you become proficient in the next language you learn.

  1. Build something you actually need right now. This could be either a tool you can use in your day job, or something useful you can make use of at home.

    Consider these example projects ...
    • Write a simple unit testing framework. Many new languages don't have any unit testing frameworks available when they are first introduced. This project will force you to use areas of the language like reflection and meta-programming. Once completed, it becomes useful straight away, for unit testing your future work in the language.
    • Implement a disk usage tool; it summarizes the disk usage of all directories on a disk and outputs to the console. It doesn't require overly complex algorithms but touches a lot of the basics: recursion, filesystems, command line parsing and output formatting.
    • Implement a backup/archive script which has command line switches to exclude certain file extensions. It should place the backup into a single .zip or .tar.gz file. The project will touch on the following: recursion, filesystems, command line parsing, compression libraries and regular expressions.
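    As a taste of the first project above, here's a minimal sketch of a reflection-based test runner in Python (the names `TestCase` and `run_tests` are illustrative, not from any real framework):

```python
# A tiny unit testing framework that uses reflection to discover
# and run test methods - a sketch, not a production tool.

class TestCase:
    def assert_equal(self, a, b):
        if a != b:
            raise AssertionError(f"{a!r} != {b!r}")

def run_tests(case_class):
    """Instantiate the class and run every method whose name starts
    with 'test_', reporting passes and failures."""
    passed = failed = 0
    case = case_class()
    for name in dir(case):                  # reflection: inspect members
        if name.startswith("test_") and callable(getattr(case, name)):
            try:
                getattr(case, name)()
                passed += 1
            except AssertionError as e:
                failed += 1
                print(f"FAIL {name}: {e}")
    print(f"{passed} passed, {failed} failed")
    return passed, failed

# Usage:
class StringTests(TestCase):
    def test_upper(self):
        self.assert_equal("go".upper(), "GO")

run_tests(StringTests)   # -> prints "1 passed, 0 failed"
```
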
  2. Port an existing, well known program to the new language. Since you are porting it, the application design work is already done, which frees up your mind to focus on the specifics of the new language. After you've finished, you'll have a good reference you can refer back to when comparing the old language with the new.

  3. Find a decent book on the language and read through it all as fast as you can. The goal is not to memorize everything, but to soak up the ideas and idioms of the language. Then write some small but non-trivial project. When you get stuck, hopefully it'll trigger a memory of something from the book, and you can go back to refer to it.

    Many of the "in a Nutshell" and "Head First" series of books, both published by O'Reilly, are highly rated by readers - they are available for many popular languages (C, C++, C#, Java, Python, JavaScript, PHP).

  4. Mix action with equal parts learning (reading books/tutorials). Too much action without learning, and you get a lot of poor quality code. Too much learning without enough action, and you fail to absorb the material to a deep enough level.

  5. Study reference works on public repositories. Find a medium sized project on GitHub which is written 100% in the language. Read through the code and try to understand it. Look for projects written by the language designers or an acknowledged expert.

    As an example, with Go, the standard libraries for the language are written in Go and are open source, e.g. here is part of the strings package.  In addition, Brad Fitzpatrick and other members of the Go team have several projects on GitHub that you can read and learn from, e.g. here is a Go client for Memcache.

  6. Devote large, uninterrupted chunks of time, at least half a day, to learning the new language. Brief, half hour sessions over the course of the week aren’t really useful, because most of the time would be spent just getting back up to speed on what you previously studied.

  7. Learning a language shouldn’t just be a solitary endeavor. There are plenty of people who have made the same mistakes that you have, so asking for help is a great way to overcome problems when you get stuck. Some possible sources of help online: Language-specific IRC channels, StackOverflow, Twitter, Facebook groups, Quora, Google+, Google groups. You can also submit your finished code to these forums after you've completed a project; people more experienced with the language than you will often be able to identify areas which can be improved or simplified.

  8. Use an editor with syntax highlighting. Perennial favorites such as Vim and Emacs, plus newer editors such as Sublime Text, support most if not all programming languages and are available for all major operating systems. Some languages are closely associated with a specific IDE, which is a good choice when learning; the usual pairings are ...
    • Eclipse IDE for Java and Android development.
    • Xcode for Objective-C and iOS development (on Mac OS X only).
    • Visual Studio IDE for C#, C++, VB.NET, F# (on Windows only).
    All of the IDEs and editors listed above are either completely free or have an unlimited trial version available.

  9. Working on a real project with real customers and deadlines is a white hot crucible for learning a new programming language. If you really need to learn a language quickly, then consider taking on a new job which requires it. Once you've got the job, you'll have no other choice but to learn it quickly.
     
  10. Finally, a tip from The Pragmatic Programmer, Tip #8 "Invest Regularly in Your Knowledge Portfolio":
    "Learn at least one new language every year. Different languages solve the same problems in different ways. By learning several different approaches, you can help broaden your thinking and avoid getting stuck in a rut. Additionally, learning many languages is far easier now, thanks to the wealth of freely available software on the Internet." 

Tuesday, January 15, 2013

Big Ball of Mud Design Pattern


In "Big Ball of Mud", Brian Foote and Joseph Yoder propose that the default (and most common) software architecture in use is the "Big Ball of Mud" pattern and go on to discuss six additional patterns and activities that it gives rise to: "Throwaway Code", "Piecemeal Growth", "Keep it Working", "Shearing Layers", "Sweep it Under the Rug" and "Reconstruction".

Their original article is located here and can be downloaded in PDF form here: Big Ball of Mud by Brian Foote and Joseph Yoder [PDF download]

I have picked out what I think are the highlights of their article ...


Big Ball of Mud ... alias Shantytown or Spaghetti Code

Shantytowns are usually built from common, cheap materials with simple tools and using unskilled labor. The construction and maintenance of the shantytown is labor intensive, and there is little or no labor specialization - each builder must be a jack of all trades. There's no overall planning, or regulation of future growth.

Too many of our software systems are, architecturally, little more than shantytowns. Investment in tools and infrastructure is often inadequate and the tools that are used are primitive. Parts of the system grow unchecked, and the lack of architecture and planning allows problems in one part of the system to erode and pollute adjacent portions. Deadlines loom like monsoons, and architectural elegance seems unattainable.

The time and money to chase perfection are seldom available, and there is a survival-at-all-costs attitude: do what it takes to get the software working and out the door on time. The biggest cost of Big Ball of Mud development is the lack of a decent architecture.


Common features of Big Ball of Mud Code
  • Data structures are haphazardly constructed, or non-existent.
  • Everything talks to everything else.
  • Important state data is global.
  • State data is passed around through Byzantine back channels that circumvent the system's original structure.
  • Variable and function names are uninformative and misleading.
  • Functions use global variables extensively, as well as long lists of poorly defined parameters.
  • Functions themselves are lengthy and convoluted, and perform several unrelated tasks.
  • Code duplication.
  • The flow of control is hard to understand, and difficult to follow.
  • The programmer’s intent is next to impossible to discern.
  • The code is simply unreadable, and borders on indecipherable.
  • The code exhibits the unmistakable signs of patch after patch at the hands of multiple maintainers, each of whom barely understood the consequences of what he or she was doing.
  • Did we mention documentation? What documentation?

Working with Big Ball of Mud Code

Some software engineers come to regard life with the Big Ball of Mud as normal and become skilled at learning to navigate these quagmires, and guiding others through them. Over time, this symbiosis between architecture and skills can change the character of the organization itself, as swamp guides become more valuable than architects.

As per Conway's Law, architects depart in futility, while engineers who have mastered the muddy details of the system they have built prevail. The code becomes a personal fiefdom, since the author can barely understand it anymore, and no one else can come close. Once-simple repairs become all-day affairs as the code turns to mud.


Throwaway Code ... alias Quick Hack or Protoduction

While prototyping a system, you're normally unconcerned with how elegant or efficient your code is. You plan that you will only use it to prove a concept and once the prototype is done, the code will be thrown away and written properly. As the time nears to demonstrate the prototype, the temptation to load it with impressive but utterly inefficient realizations of the system’s expected eventual functionality can be hard to resist. Sometimes, this strategy can be a bit too successful. The client, rather than funding the next phase of the project, may slate the prototype itself for release.

This quick-and-dirty coding is often rationalized as being a stopgap measure. More often than not, the time is never found for the follow-up work. The code languishes, while the product flourishes. It becomes a protoduction - a prototype that gets used in production.

Once it becomes evident that the throwaway code is going to be around for a while, you can turn your attention to improving its structure, either through an iterative process of Piecemeal Growth, or via a fresh draft, as discussed in the Reconstruction pattern below.


Piecemeal Growth ... alias Refactoring

Successful software attracts a wider audience, which can, in turn, place a broader range of requirements on it.

When designers are faced with a choice between building something elegant from the ground up, or undermining the architecture of the existing system to quickly address a problem, architecture usually loses.

In the software world, we deploy our most skilled, experienced people early in the lifecycle. Later on, maintenance is often relegated to junior staff, and resources can be scarce. The so-called maintenance phase is the part of the lifecycle in which the price of the fiction of master planning is really paid. It is maintenance programmers who are called upon to bear the burden of coping with the ever widening divergence between fixed designs and a continuously changing world.

Piecemeal growth can be undertaken in an opportunistic fashion, starting with the existing, living, breathing system, and working outward, a step at a time, in such a way as to not undermine the system’s viability. You enhance the program as you use it. Massive system-wide changes are avoided - instead, change is broken down into small, manageable chunks.
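The idea of small, behavior-preserving steps can be sketched with a toy example (hypothetical code, not from the article): extract one well-named helper at a time, and verify after each step that behavior is unchanged.

```python
# A "piecemeal" step: instead of redesigning a messy report function
# wholesale, carve out one small, well-named helper and confirm the
# output hasn't changed before moving to the next step.

def report_before(orders):
    total = 0
    for o in orders:
        total += o["qty"] * o["price"] * (1 - o.get("discount", 0))
    return f"Total: {total:.2f}"

# Step 1: extract the line-total calculation into a named helper.
def line_total(order):
    return order["qty"] * order["price"] * (1 - order.get("discount", 0))

def report_after(orders):
    return f"Total: {sum(line_total(o) for o in orders):.2f}"

orders = [{"qty": 2, "price": 9.50}, {"qty": 1, "price": 20.0, "discount": 0.1}]
assert report_before(orders) == report_after(orders)   # behavior preserved
```
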


Keep It Working ... alias Continuous Integration

Businesses become critically dependent on their software and computing infrastructures. There may be times where taking a system down for a major overhaul can be justified, but usually, doing so is fraught with peril. Therefore, do what it takes to maintain the software and keep it going. Keep it working.

This approach can be used for both minor and major modifications. Large new subsystems might be constructed off to the side, perhaps by separate teams, and integrated with the running system in such a way as to minimize disruption.

A development build of each product can be performed at regular intervals, such as daily or even more often via an automated build tool. Another vital factor in ensuring a system's continued vitality is a commitment to continuous testing, which can be integrated into the automated build process.


Shearing Layers

Software never stands still. It is often called upon to bear the brunt of changing requirements because, being made of bits, it can change.

Over time, the software's frameworks, abstract classes, and components come to embody what we've learned about the structure of the domains for which they are built. More enduring insights gravitate towards the primary structural elements of these systems and change rarely. Parts which find themselves in flux are spun out into the data, where users can interact with them. Software evolution becomes like a centrifuge stirred by change. The layers that result, over time, can come to a much truer accommodation with the forces that shaped them than any top-down planning could have devised.


Sweeping It Under The Rug

At first glance, a Big Ball of Mud can inspire terror and despair in the hearts of those who would try to tame it. The first step on the road to architectural integrity can be to identify the disordered parts of the system, and isolate them from the rest of it.

Overgrown, tangled, haphazard spaghetti code is hard to comprehend, repair, or extend, and tends to grow even worse if it is not somehow brought under control. If you can’t easily make a mess go away, at least cordon it off. This restricts the disorder to a fixed area, keeps it out of sight, and can set the stage for additional refactoring.
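One common way to cordon off a mess is a facade: new code talks only to a small, clean interface, and the tangle behind it can later be refactored in isolation. A hypothetical sketch:

```python
# Cordoning off legacy code behind a facade. The legacy function and
# its pricing rules are invented for illustration.

def _legacy_price_calc(q, p, c, f):   # imagine this buried deep in the mud
    x = q * p
    if c == "VIP": x = x - x * 0.2
    if f: x = x + 5
    return x

class PricingFacade:
    """New code talks only to this facade, never to the legacy guts,
    so the disorder stays in a fixed, out-of-sight area."""
    def quote(self, quantity, unit_price, customer_tier="STD", rush=False):
        return _legacy_price_calc(quantity, unit_price, customer_tier, rush)

print(PricingFacade().quote(3, 10.0, customer_tier="VIP"))   # -> 24.0
```
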


Reconstruction ... alias Total Rewrite

One reason to start again might be that the previous system was written by people who are long gone. Doing a rewrite provides new personnel with a way to reestablish contact between the architecture and the implementation. Sometimes the only way to understand a system is to write it yourself. Doing a fresh draft is a way to overcome neglect. Issues are revisited. A fresh draft adds vigor. You draw back to leap. The quagmire vanishes. The swamp is drained.

When a system becomes a Big Ball of Mud, its relative incomprehensibility may hasten its demise, by making it difficult for it to adapt. It can persist, since it resists change, but cannot evolve, for the same reason. Instead, its inscrutability, even when it is to its short-term benefit, sows the seeds of its ultimate demise.


The above are highlights from the original article, which is located here ... Big Ball of Mud by Brian Foote and Joseph Yoder [PDF download]


Some further reading on Programmers.StackExchange ...

I've inherited 200K lines of spaghetti code — what now?
How to convince my boss that quality is a good thing to have in code?
How to keep a big and complex software product maintainable over the years?
Techniques to re-factor garbage and maintain sanity?
When is code “legacy”?
What is negative code?


Some related books ...

Working Effectively With Legacy Code ... by Michael Feathers [Amazon]
Refactoring: Improving the Design of Existing Code ... by M. Fowler, K. Beck, et al. [Amazon]
Design Patterns: Elements of Reusable Object-Oriented Software ... by the Gang of Four [Amazon]
Patterns of Enterprise Application Architecture ... by Martin Fowler [Amazon]
Domain Driven Design: Tackling Complexity in the Heart of Software ... by Eric Evans [Amazon]
Head First Design Patterns by E. Freeman, E. Freeman, et al. [Amazon]


Thursday, January 3, 2013

The first right answer is the only answer

Reading through David B. Stewart's paper entitled "Twenty-Five Most Common Mistakes with Real-Time Software Development" (PDF, 131 KB).

There's an interesting nugget of advice at number 8, "The first right answer is the only answer":
Inexperienced programmers are especially susceptible to assuming that the first right answer they obtain is the only answer. Developing software for embedded systems is often frustrating. It could take days to figure out how to set those registers to get the hardware to do what is wanted. At some point, Eureka! It works. Once it works the programmer removes all the debug code, and puts that code into the module for good. Never shall that code ever change again, because it took so long to debug, nobody wants to break it.

Unfortunately, that first success is often not the best answer for the task at hand. It is definitely an important step, because it is much easier to improve a working system, than to get the system to work in the first place. However, improving the answer once the first answer has been achieved seems to rarely be done, especially for parts of the code that seem to work fine. Indirectly, however, a poor design that stays might have a tremendous effect, like using up too much processor time or memory, or creating an anomaly in the timing of the system if it executes at a high priority.

As a general rule of thumb, always come up with at least two designs for anything. Quite often, the best design is in fact a compromise of other designs. If a developer can only come up with a single good design, then other experts should be consulted with to obtain alternate designs.
As David suggests, when dealing with complex, mission critical or concurrent sections of code, that first success is often not the best solution. Weeks later, you might find that it's not performing as well as it should in production, and you have to revisit that section of code looking for a better solution. But the best time to develop a better solution was back when the job was fresh in your mind, when you developed the original one.

So he suggests adapting this into a new practice:
As a general rule of thumb, always come up with at least two designs for anything.
There are multiple problems with this advice:
  • YAGNI, you ain't gonna need it. Develop an extra solution only if and when you need it; don't create extra work where it might not be needed.
  • Premature optimization is the root of all evil. Knowing when you need to optimize a solution further is almost impossible to do at design time, before any benchmarking or code profiling have been done.
  • Lastly, if your solution passed the unit test, yet fails further down the line (in production for example), then that suggests there was a problem with your unit test, and not with your solution. If required, you should add some specific concurrency and/or performance testing to your unit test. This will mean you can then optimize your code, while maintaining a TDD approach.
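That last point can be sketched with a toy example (hypothetical code and thresholds): alongside the usual correctness tests, a timing assertion lets later optimization work stay test-driven.

```python
# A correctness test plus a performance test in the same suite, so that
# a later "improvement" that regresses to O(n^2) fails the build.
# The function and the 1-second threshold are invented for illustration.
import time
import unittest

def find_duplicates(items):
    """Return the set of values that appear more than once."""
    seen, dupes = set(), set()
    for x in items:
        if x in seen:
            dupes.add(x)
        seen.add(x)
    return dupes

class FindDuplicatesTests(unittest.TestCase):
    def test_correctness(self):
        self.assertEqual(find_duplicates([1, 2, 2, 3, 3, 3]), {2, 3})

    def test_performance(self):
        data = list(range(100_000)) * 2
        start = time.perf_counter()
        find_duplicates(data)
        self.assertLess(time.perf_counter() - start, 1.0)

# Run the suite programmatically:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FindDuplicatesTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```
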
I notice the original article is actually from 1999, which I believe predates the rise of test driven development. I think this particular piece of advice ("come up with two designs for anything") might have been OK for some projects back then, but would now be considered flawed and certainly not advisable.