My Computer Language is Better than Yours

If you are a very large, rich technology company today, it seems it is no longer enough to have your own humongous data centers, luxurious buses, and organic lunch bars. You need your very own programming language, too.

Google has Go, conceived in 2007 and released to the public in 2009. Facebook introduced Hack last spring. And Apple unveiled Swift not long after.

In war, as George Orwell had it, the winners write the history books. In tech, the winning companies are writing the programming languages. The Internet was built on open standards and code, but the era of social networks and the cloud is dominated by corporate giants. And they are beginning to put their unique stamps on the thought-stuff of digital technology — just as inevitably as William the Conqueror and his Normans imported tranches of early French into the nascent English tongue, in ways that still shape our legal and financial language. (Something to think about the next time you make a payment on your mortgage.)

The new languages give programmers some helpful legs up, to be sure. Google’s Go is structured to simplify the work of making code run “concurrently,” smoothing the way for programmers to create and juggle portions of a program that execute simultaneously — and thus take full advantage of today’s multicore chips and multiprocessor machines. Apple’s Swift offers iPhone developers some of the terseness and agility of popular Web scripting languages, such as PHP and JavaScript. Each comes with its own logo, too: Swift a stylized bird, Go a goofy gopher.
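To make “concurrently” concrete, here is a minimal sketch of goroutines and channels, the Go primitives that do that juggling. The fetch function and URLs here are invented for illustration; a real version would do network I/O.

package main

import "fmt"

// fetch pretends to download a URL and reports the result on a channel.
func fetch(url string, results chan<- string) {
	results <- "fetched " + url
}

func main() {
	urls := []string{"a.example", "b.example", "c.example"}
	results := make(chan string)

	// The "go" keyword launches each call as a goroutine, a
	// lightweight thread managed by the Go runtime.
	for _, u := range urls {
		go fetch(u, results)
	}

	// Receive one result per goroutine; the channel coordinates
	// the concurrent work without explicit locks.
	for range urls {
		fmt.Println(<-results)
	}
}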

Neither of these projects aims to blow up the status quo. Instead, they’re smoothing out wrinkles and optimizing code for the dominant waves of today’s technology. If we want to know what it means for our digital lives when big companies control and shape the very languages in which technology is developed, that’s one clue. If this is programming’s Age of Imperialism, should we sing along or raise our fists?

Let’s start with how Google banished semicolons and embraced braces.

Google Go

The Essence of Go

Ken Thompson, Rob Pike and Robert Griesemer, three coding gurus at Google, dreamed up Go in 2007 while — as they only half-jokingly say — waiting for their C++ and Java code to compile. These widely used workhorse programming languages were getting pokey, particularly when hitched to the kind of massive programs Google deploys. Every time you added or changed something, you had to wait for the compiler to “build” the binary version — to boil your code down to its machine-readable essence.

“Builds were taking 45 minutes,” Pike explains in one of his many talks evangelizing Go. “I considered that painful. When builds take that long, you have a lot of time to think about what you might do better.”

Designing a programming language is all about tradeoffs — between what’s easier for the programmer and what best suits the machine.

Crafting code that runs fast demands more effort from the programmer. How much time and energy should humans devote to writing code that runs swiftly? How much busywork and heavy lifting do you instead let the developer hand off to the computer? Another major tradeoff lies in the amount of direct access to machine memory that the language provides. Here, as in so many other places, the language inventor must choose: How much freedom do you give programmers, knowing they might screw up? How many pillows do you surround them with to cushion their stumbles, knowing that each one you add will slow programs down?

The undertaking of language design is Miltonic, you see: formal, majestic, riddled with dilemma and paradox. There’s no right answer — just different choices to suit changing hardware, changeable users and picky programmers.

Go’s creators had plenty of experience making such choices. Thompson co-invented Unix; he and fellow Bell Labs veteran Pike devised the style of character encoding, called UTF-8, that most of the Web uses today. So they knew that little decisions can have big consequences. Every rule added today could mean gazillions of future keystrokes for the programmers of tomorrow; every rule omitted could mean countless crashes to come.

For instance: Programming languages commonly use semicolons to separate statements; braces group related statements together. Here’s the classic “Hello, world” program in the venerable C language:

#include <stdio.h>

int main(void)
{
    printf("hello world");
}

Braces were essential, Go’s creators felt. Some languages, notably the popular Python, have tossed them aside, allowing programmers to use indentation — white space, or “invisible characters” — to lay code out for both the human eye and the machine. The Go team believed that was a “profound mistake.” Braces meant programmers could tell the computer, explicitly and unambiguously, how to chunk code in larger blocks. (At one meeting with Sergey Brin, the Google founder suggested Go’s designers use square brackets rather than curly braces, saving developers countless trips to the “shift” key. “He didn’t win every argument,” Pike recalls.)

So braces made Go’s cut. But in December 2009, the Go brain trust decided to stop requiring programmers to end statements with semicolons. “Semicolons are for parsers” — behind-the-scenes tools that break programs down into chunks of related code — “not for people, and we wanted to eliminate them as much as possible,” their FAQ now explains. Henceforth, the language’s machinery would “inject” the semicolons for you after you handed it your code.
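For comparison, here is the same “Hello, world” in Go: braces intact, semicolons nowhere to be seen, because the toolchain adds them behind the scenes as it reads each line.

package main

import "fmt"

func main() {
	fmt.Println("hello world")
}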

Go’s great semicolon purge saved labor and eyestrain. But in order for the semicolon injections not to go haywire, programmers would now have to deploy their braces with a tad more rigor — otherwise, a semicolon might get injected in the wrong place.
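The required rigor is mostly about where the opening brace lands. Here is a small sketch of the classic trap; the variable and messages are invented for illustration:

package main

import "fmt"

func main() {
	x := 1

	// Fine: the opening brace shares a line with the "if",
	// so the injected semicolons land where statements really end.
	if x > 0 {
		fmt.Println("positive")
	}

	// Broken: put the brace on its own line and the lexer injects
	// a semicolon right after "x > 0", so the program no longer
	// compiles. Uncomment to watch it fail:
	//
	// if x > 0
	// {
	// 	fmt.Println("positive")
	// }
}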

These choices are not without controversy. “They poisoned the language with redundant braces!” complained a commenter on one of Pike’s lectures. The language could just as easily have been designed so that mere white space served the same role as braces in breaking up different segments of code. To which Googler Andrew Gerrand responded, “At scale weird shit happens every day. That means that, semi-regularly, someone will sneak an invisible character into the code base that causes a subtle bug. This has happened more than once in Python programs at Google.”

Just as William Blake imagined seeing a world in a grain of sand, a programmer can see a punctuation mark as a door between dimensions. For the rest of us, of course, not so much.

However headily syntax may intoxicate the programmers who fill software forums with ardent disputes over its nuances, what interests most people about Go, or any other language, is the “superpower” that makes the language fly. For Go, that would be its approach to concurrency.

Unlike the languages we speak — what programmers call “natural” languages, ones that emerge in the wild over time — programming languages are crafted for specific ends and uses. Go, as Pike puts it, is “designed by Google to help solve Google’s problems. Google has big problems… We needed a language that made it easier for us to get our job done, and our job is writing server software.”

Google runs its very own global supercomputer in the cloud, and that is precisely the kind of computing Go is optimized for. But Google has never made a cent selling software, and Go has been a free, open-source project from the, er, get-go. That has helped it quickly make its way into the technical arsenals of other outfits. It is becoming, as an analyst at the Redmonk consultancy put it, “the emerging language of cloud infrastructure” — because, in 2014, every platform could use a little extra efficiency and oomph on the server side.
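Part of that server-side appeal is how little ceremony a working network service requires. Here is a minimal sketch of a web server built on nothing but Go’s standard library; the port and greeting are arbitrary choices for illustration:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Each incoming request is handled in its own goroutine;
	// the standard library manages the concurrency for you.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "hello from Go: %s\n", r.URL.Path)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}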

And it’s catching on. For instance, Dropbox has moved most of its backend code from Python to Go. And Automattic, the company that runs WordPress.com, has begun tinkering with Go as well, even though WordPress itself has always been in PHP, a 20-year-old scripting language. I talked with Demitrious Kelly, an Automattic developer who has begun to use the language. “There’s a dozen new frameworks and methodologies and whatnot a week these days, it seems,” he says. “Everything is a new killer something. You have to ask: Is it better than what we have? But that in itself is a complicated question. Better how? What does it let us do that we couldn’t do before? And is it worth the hassle?” Kelly says Go fares well on these tests, in part because the language is small: “Go is actually really very easy to pick up for a week, bang out a project, put back down, and go back to PHP.”

Given that Go was designed with Google’s particular problems in mind, the syntactic choices—the semicolons-and-braces philosophy—may seem like a “how many angels can dance on the head of a punctuation mark” kind of question. Yet these matters are not so trivial. It takes a passion for detail and, typically, a willingness to flout tradition for a programmer to bring a new language into the world. What may ultimately drive a language’s adoption is its designers’ studious attention to the rough spots of everyday coding — what programmers everywhere call their “pain points.”

Apple Swift

The Origin of Swift

Every programming regime has such pain points. But the rapid rise of iOS, the iPhone operating system, has given developers more than the usual quotient. Until the advent of Swift this summer, if you wanted to write a program for iOS, you had to use a language called Objective C. Steve Jobs’ NeXT had adopted Objective C in its youth, in the ’80s, and after Jobs’ return to Apple the language grew to become Apple’s workhorse tool for Mac OS X; when iOS came along, Objective C moved right in with it.

Today developers say the language is showing its age. “Apple had decades-old cruft in the face of anyone who wanted to write for any of their platforms,” says Andy Hertzfeld, a software veteran who wrote much of the original Mac operating system and recently retired from Google. “I got pretty excited about Swift when I saw the announcement, because I’ve always despised Objective C. I like the principles behind it, but I hate the syntax, and have never been able to really enjoy programming in it.”

Apple entrusted its next-generation programming-language project to a computer scientist named Chris Lattner. He had won acclaim as the leader of a powerful and popular open-source project called LLVM, which is a kind of toolkit for writing compilers that can run on disparate platforms. (Both Apple and Google make extensive use of it.) After joining Apple in 2005, Lattner continued working on LLVM and related projects, then disappeared from view for a couple of years — to emerge last June with Swift at Apple’s Worldwide Developer Conference.

Swift aims to be “the first industrial-quality systems programming language that is as expressive and enjoyable as a scripting language.” In other words, Swift is promising that you’ll be able to write crash-resistant code that runs fast without having to break a sweat. And you’ll be able to do it with the instincts and habits of a Web developer circa 2014, rather than having to wrench your brain back into the ’90s, or earlier.

Cue loud cheers from legions of iOS developers and onlookers. “Beautifully done,” says Hertzfeld. “It relieves enormous pain points right in everyone’s face. So the only iOS developers who are not going to get on top of Swift are the dumb ones.” Since Swift is built to co-exist with Objective C code within the same project, toe-wetting is easy, even for developer sticks-in-the-mud.

But if you sign on for Swift, you are buying into an entire universe that is shaped and owned by Apple. You will develop your programs inside toolboxes built and sold by Apple; you will run your programs on Apple machines, and have to rewrite your code in another language if you want it to run anywhere else; your fate is joined at the hip with Apple’s.

“You have to commit to the walled garden,” Hertzfeld says. So he’s resisting the temptation to work in Swift — though, he adds, “If they had an open source implementation and had shown a little bit of interest in making it cross-platform, I probably would.”

An open-source version of Swift would mean developers could find ways to port programs to different platforms, and would provide some assurance that Swift could have a future even if Apple lost interest down the road. Developers who have been burned in the past by sojourns in other “walled gardens” often care deeply about this. And Apple isn’t completely allergic to the open-source approach, though it appears determined to hold a tight rein over the iOS world. Shortly after Swift’s announcement, developers on the (fully open-source, cross-platform) LLVM project began pestering Apple and Lattner about Swift’s closed-off nature. Lattner responded:

Guys, feel free to make up your own dragons if you want, but your speculation is just that: speculation. We literally have not even discussed this yet, because we have a ton of work to do to respond to the huge volume of feedback we’re getting, and have to get a huge number of things (e.g. access control!) done before the 1.0 release this fall. You can imagine that many of us want it to be open source and part of llvm, but the discussion hasn’t happened yet, and won’t for some time.

Sorry to leave you all hanging, but there is just far too much to deal with right now.

By now, Swift’s 1.0 release has come and gone. I could not pierce Apple PR’s cone of silence to get further comment from Lattner. But a note such as this gives some sense of the struggle between openness and ownership that may be playing out in his soul, and Apple’s. (Peter Wayner provides a usefully exhaustive rundown of the issues in InfoWorld.)

Swift hasn’t been around as long as Go, so most developers have yet to kick its tires. In any case, its future in Apple-land is secure — it’s the trust-fund baby of programming languages. If Apple says Swift is the future for a billion iOS devices, then it will be the future. That inevitability, really, is its superpower. People like David Wheeler, an independent iOS developer in Portland, Oregon, will adopt it, not only because they have little choice in the long run, but because it makes sense. Wheeler says Swift took him by surprise; he figured Apple would just keep patching new improvements onto Objective C. “It has great promise, and I’m excited to see where it goes — I expect to write my first app in it within the next few weeks.”

But elsewhere its uptake will be problematic. That’s because Swift inherits so much from Apple’s DNA: As so many Apple creations do, the language creatively bridges worlds — in this case, those of systems programming and scripting. But it protects those beautiful bridges behind an impenetrable moat.


The Language Instinct

There’s nothing terribly new about spawning programming languages at large technology businesses. The dominant languages of the mainframe computer era had similar origins: FORTRAN emerged from IBM, and COBOL was largely based on Grace Hopper’s Flow-matic, created for Remington Rand’s Univac. In the 1990s, Sun gave us Java; in the 2000s, Microsoft gave us C#.

The truth is that the overwhelming majority of computer languages are products of big institutions — corporations or universities — because they have to be.

“Birthing a new programming language takes a lot of resources,” says Hertzfeld. “It’s a decade-long project to get a new language fully tooled and established and used. You can’t do it as a small company.”

Despite the impediments, the lament that there are “too many languages” has echoed through the computer industry at least since the early 1960s, when the Association for Computing Machinery first put a tower of Babel on its journal’s cover. And the lament is as futile as ever today. Programmers are unlikely to stop devising new languages or agree on one to share because — as Alex Payne, an early developer at Twitter who co-founded an “emerging languages” conference, puts it — “There’s no incentive. The history of language is filled with standardization efforts that went terribly, terribly wrong — wasted a ton of time and didn’t really produce results that anyone was happy with. I think it’s going to be a Tower of Babel for a while longer.”

(I don’t mean to ignore Hack, the new language that Facebook has developed. Nothing Facebook does should be ignored. But even though Hack is open source and essentially a variant or extension of the widely used PHP language, it has not yet fostered much enthusiasm outside the company. No doubt Facebook would like to see that change, but it’s not something the social network is aggressively pushing. The most positive reaction to Hack these days outside Facebook is “wait and see.”)

Not a single developer I talked to for this piece felt strongly that the new wave of programming languages represents a competitive power play on the part of the companies sponsoring them. Instead, they point out, every new language begins as an obsessional seed in the brain of an individual or small group: This has always bugged me. We can do better. Anyway, it takes patience and effort to learn a new coding language; developers choose carefully. Says Payne: “What I look for more when picking a new language is the other people who are flocking to that language — because those are the people you’re going to be dependent on for libraries, for documentation. You want to know if you’re moving into the right town, I guess.”

One thing we can say with some confidence is that these new languages are good. They help make programmers’ lives easier. They streamline the craft of programming. They incorporate promising new ideas. And they earn respect from developers inside and outside the corporate tent.

For all these reasons, imperialism is probably the wrong historical comparison to make for this wave of new programming languages. Instead, we’re talking about something more like what foreign-policy types call soft power: the cultivation of influence by example, diplomacy, outreach and the spread of your worldview. In very specific ways, both Go and Swift exemplify and embody the essences of the companies that built them: the server farm vs. the personal device; the open Web vs. the App Store; a cross-platform world vs. a company town. Of all the divides that distinguish programming languages — compiled or interpreted? static vs. dynamic variable typing? memory-managed/garbage-collected or not? — these might be the ones that matter most today.

In other words, the real reason for anyone to worry about the world of corporate-bred programming languages is probably not, “OMG they want to take over the world!” Rather, it’s that, no matter how big they grow, they will always be shaped by their roots.

The thing about programming languages is that, once they get into programmers’ heads, you never really know where they’re going to end up. The object-oriented programming enthusiasts who created Objective C in the ’80s could not have known it would become the programming language of necessity for a massive global ecosystem of mobile devices a quarter-century later. When Sun rolled out Java in 1995, everyone thought it would be a dandy tool for building browser applets that made images dance, yet its destiny was mostly server-side. Meanwhile, JavaScript, which was released at almost the same moment and then widely ignored, makes most of the Web move today.

For developers, then, choosing a language is like choosing citizenship in a country. You’re not only buying into syntax and semantics. You’re buying into economics and culture, the rules that shape how you earn your livelihood and the forces that channel your hopes and dreams.

As they used to say in a dead language that once ruled the world: caveat emptor.

Source: Medium
