Friday, March 31, 2017


The history of the "copyleft" movement is more or less coeval with the emergence of the Web as a means and mode of sourcing, creating, and publishing works, whether texts, images, music, or multi-media productions. Some trace the movement to the desire of some computer programmers in the 1970's to create versions of programming languages and software that could be developed independently of corporate-owned commercial products. Others see its roots in the xeroxed or mimeographed "zine" culture, or in punk-rock designers' preferences for public-domain clip art. Whatever its exact origins, a signal moment arrived in 1988 when Richard Stallman created the first copyleft license, the EGPL (Emacs General Public License), which evolved into the GNU General Public License in 1989. A version of this license was, until recently, used by Wikipedia and many other wiki- and crowd-sourced sites; in essence, it declares that the licensed material may be shared by anyone so long as a) they indicate the source; and b) the same license to share is imposed upon all subsequent users. The GNU license originally didn't contemplate "remixing," in which the GNU-licensed material might make up only a small part of a new work, nor did it account for problems that might occur when someone tried to copyright a longer text that quoted from GNU-licensed material. In 2009, in part because of such concerns, the Wikimedia Foundation dropped the GNU license in favor of a Creative Commons license known as CC-BY-SA, which has similar attribution and share-alike requirements, but is friendlier to quotation and remix, and doesn't require reproducing the full text of the license in every work (Creative Commons maintains human-readable and legal versions of all its licenses).

Creative Commons licenses are now quite common, and have been used on blogs, text archives, wikis, streaming music and media, and downloadable media. I haven't yet seen one on a printed book, but it's certainly conceivable that they will appear on certain e-books or other reference works. They offer the advantage of a variety of licenses, which allow for a) copying; b) modifying; c) doing either with or without attribution; and d) allowing commercial or non-commercial use. Creators of content can thus exercise as much, or as little, control over the re-use of their material as they like. Of course, CC-licensed material has rarely been at issue in a courtroom, and as a result there is little case law to indicate how effective these licenses may be when it comes to the vital question of whether and to what extent they are enforceable. And perhaps that's best, for now.
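The mix-and-match design of these options is easy to see in a short sketch. The six standard CC licenses all require attribution (BY) and combine optional NonCommercial (NC), ShareAlike (SA), and NoDerivatives (ND) terms; SA and ND are mutually exclusive, since you can't require share-alike on derivatives you've forbidden. (The sketch below is purely illustrative; the function name is my own.)

```python
# Toy sketch: generate the six standard Creative Commons license names
# from their three optional conditions.
from itertools import product

def cc_licenses():
    names = []
    for sa, nc, nd in product([False, True], repeat=3):
        if sa and nd:
            # Incompatible: NoDerivatives forbids the derivative works
            # that ShareAlike would govern.
            continue
        parts = ["BY"] + [tag for tag, on in
                          (("NC", nc), ("SA", sa), ("ND", nd)) if on]
        names.append("CC-" + "-".join(parts))
    return names

print(cc_licenses())
```

Running this yields the familiar six: BY, BY-SA, BY-NC, BY-NC-SA, BY-ND, and BY-NC-ND.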

Friday, March 24, 2017

History of Copyright

The idea of an inherent right of the author of a written work to protect it from unauthorized copying is, in terms of western history, quite recent indeed. The 1709 "Statute of Anne" was the first legal recognition of the rights of an author. It presented itself as "an act for the encouragement of learning," with the implicit argument that allowing authors the exclusive right to publish their work for a limited term would enable them to earn some reward for their labors, while at the same time eventually allowing their work to be used freely. As with earlier systems of intellectual property, such as "Letters Patent," the Act's term was limited -- 14 years, which could be extended for 14 more, after which the rights of the author expired; it was understood then, as it is now, that authors, like inventors, quite frequently drew from the works of those who had come before them, and that preserving such rights indefinitely would stifle creativity. One thing that has certainly changed since 1709 is the term of copyright; US copyright eventually settled on a period twice as long as the Statute of Anne (28 years, renewable for 28 more years); revisions to this law in the past several decades have extended these 56 years to 80, 100, and even as many as 120 years; the last of these revisions, the 1998 "Sonny Bono Copyright Term Extension Act," went further still, freezing the date at which works could enter the public domain at 1923 (a separate law, the Uruguay Round Agreements Act, even re-instated copyright in certain foreign works where it had become extinct). Many creative artists feel that this law has exercised a stifling effect upon creativity; many of them joined in support of a legal case, Eldred v. Ashcroft, that challenged these extensions on the basis of the Constitution's reference to copyright law being for a "limited term." The Supreme Court eventually ruled against Eldred, saying in effect that Congress could establish any length of term it wanted, so long as it was not infinite. Could is, of course, not should.

The result has been, ironically, that in the very age when the ability of writers, artists, and musicians to draw upon, alter, and incorporate what the copyright office calls "previously existing works" is at its greatest, the legal barriers against doing so have been raised to the harshest and longest in the history of copyright protections. This is offset, to a degree, by two factors: 1) "fair use," a doctrine codified in the 1976 revision of the law, whereby a certain limited amount -- say, less than 10% of the original "work" -- may be used, particularly when it is not employed for profit, is used in an educational context, and/or is used spontaneously (the statute itself prescribes a four-factor balancing test rather than any fixed percentage); and 2) simple lack of enforceability. It's quite impossible to police all the billions of web servers, web pages, and personal computers and devices to ensure that no copyrighted material has been taken or stored; enforcement, as a result, tends to be spotty if dramatic (as in the case of a Minnesota woman who was assessed damages of 1.5 million dollars for sharing 24 music files on a peer-to-peer network).

It needs to be noted that copyright also functions very differently depending on the medium in question. Printed texts are straightforward enough, but in the case of physical media such as a sculpture or a painting, possession of the physical object confers certain property rights, including the right -- if one desires -- to restrict or prohibit "derivative" works such as photographs of these works, although the issue of non-manipulated or "slavish" copies is a murky one. Music is the most complex form: there are at least four layers of copyright in a recorded song: 1) The composition itself, and its embodiment in sheet music; 2) The performance of that composition on the recorded matter, including the act of interpretation and any variations on the composition; 3) The physical embodiment, if any, of this performance, known as "mechanical" rights; and 4) The right to transmit the performance. All of these, of course, were once separate domains: the sheet-music industry/print, the recording studio, the record company or "label," and radio stations -- but all are now merged indistinctly into a single, complex activity that can all be achieved on a single device, even a smartphone.

In Hip-hop, nearly all original samples were taken from vinyl, and most consisted of a few measures of the "break beat" -- far less than 10% of the original recording.  And yet, by cutting or looping this beat, it could in fact become the rhythm track for an entire recording.  How then to measure that, in terms of originality? Legal cases attempted to separate the "essence" of a song from its literal embodiment in part or whole -- a measure which tripped up George Harrison, whose "My Sweet Lord" was found to have unconsciously copied the essence of the Chiffons' "He's So Fine." And yet, even if the "essence" is not copied, copying enough of the literal bits and pieces (riffs, beats, harmonies, etc.) can lead to the same conclusion of infringement. The technical term that has been developed for this is "fragmented literal similarity," and it would seem to be the measure most applicable to the use of samples in Hip-hop. But how much similarity, how many fragments, are enough?

As things have turned out, neither of the two most significant cases of copyright litigation involving Hip-hop ended up dealing with this question.  In one case, Biz Markie's half-singing of the chorus from Gilbert O'Sullivan's "Alone Again (Naturally)" was represented by his lawyers as being part of the natural creative practice of Hip-hop -- an argument which they lost spectacularly, as the recording was quashed, all copies ordered destroyed, and substantial damages awarded.  In the other case, one which reached the Supreme Court, Luther Campbell and 2 Live Crew were sued for their parodic Hip-hop version of "Oh, Pretty Woman," originally made famous by Roy Orbison.  Their lawyers quite wisely avoided the originality issue altogether, arguing that the 2 Live Crew version was a parody, an argument accepted by the "Supremes" on fair-use grounds.  After all, if we're going to use our free speech to parody or mock others, we'll have to be imitative, won't we?  It's too bad Biz's lawyers didn't make the same argument.

The most spectacular recent case was that over Robin Thicke's "Blurred Lines," which was found in a jury trial to have infringed the copyright for Marvin Gaye's "Got to Give it Up." In this case, the most unusual feature was that the judge, John A. Kronstadt of the U.S. District Court, prohibited the jurors from listening to the commercial recordings, insisting that any evidence had to be based on the sheet music. For a jury, most of whose members could not read sheet music, the days of expert testimony must have been mind-numbing. Many musicians decried the resulting verdict as far too broad, in effect prohibiting any music from drawing inspiration from any past recordings, and signed on to an appeal in 2016. In an amicus brief filed by a University of Washington law professor, it was further argued that both compositions were essentially "aural" -- that is, composed and recorded without reference to sheet music -- and that the exclusion of the recordings was a fatal error. A ruling has yet to be issued.

Thursday, March 23, 2017


We are being watched. And yet, although public security cameras are the most visible and obvious signs of surveillance, there are now many more efficient ways to follow an individual person. A few years ago, a German politician, Malte Spitz, made headlines when he asked for his records from Deutsche Telekom, and found that they had recorded his exact location in terms of longitude and latitude more than 35,000 times in one six-month period. And yet this is nothing new; cell phones only function when they can be located by the cellular system; the only news was that the information was retained. Indeed, there have been many sociological studies made using anonymous cellphone data to examine traffic patterns, pedestrian flow, and other broader areas of human society. Such data exists, and it would be a small leap indeed to link it with personal information.

Fear seems to predominate, and although we may all have a creepy feeling of being followed, few want to rock the boat by complaining. Fears over terrorism, indeed, have made many people feel safer with the cameras running. In London, one of the most surveilled cities on earth, some have estimated that there is one camera for every 14 citizens -- 421,000 in London alone. And yet there have been relatively few instances of vandalism of these cameras. In more rural areas of England, by contrast, roadway cameras designed to catch speeders -- the hated "Gatsos" -- have been frequently vandalized, with the means varying from spray paint and hammers to, in a number of instances, bombs. Are rural Brits angrier than urban ones? Apparently, a speeding ticket is more hated than the idea of being watched while one shops (or else rural folk are more likely to reach for a blowtorch or a sledgehammer).

But all this belies the strange truth of how our modern Internet's Big Brother came into being: we summoned him ourselves. Like Aladdin rubbing his lamp, we asked for all kinds of goods and services, little realizing that each of our requests created a valuable little bit of data about ourselves.  We asked for the convenience of on-line banking, of getting our medical test results, of qualifying ourselves for mortgages, all without having to leave home or sign a physical piece of paper. And it's those millions of acts that have made us most susceptible to invasions of our privacy, whether by the government, corporations, or the eager army of hackers. It turns out that Big Brother doesn't have to bother to "watch" us -- in many instances we're freely giving our information to him.

And we all know the story about the frog in the pot that was gradually heated up. We've gotten used to some of this intrusiveness, and are more willing today than we were yesterday to give up a little privacy in return for convenience. As one sign of this: Google's "new" feature (announced this week!) allowing people to share their own locations on Google Maps is in fact nearly identical to a feature Google launched eight years ago and later killed, known as Google Latitude. Back then, privacy concerns put the kibosh on the feature, but today it seems to be as welcome a development as sliced bread.

Sunday, March 19, 2017

Hand-held Media

I suspect that, given the topic for this week, you probably weren't expecting to see ... a "transistor radio." It was the latest in hand-held media technology when it débuted in 1954, in an era when a "radio" was a piece of furniture only slightly smaller than the sofa. It was co-produced by Texas Instruments, which would later be a pioneer in the field of computing; four years later, its laboratory would be the birthplace of the integrated circuit, and it would produce the first hand-held calculator in 1967 -- for the low, low price of $2,500! It was probably around that year that I first got my own transistor radio, complete with a single monaural ear-bud, and saw one of the TI calculators my dad had brought back from the lab at General Electric (he actually had to sign it out, since it was such an expensive piece of hardware).

Of course no one foresaw in these early days that there would come a slow, Frankenstein-like convergence which would create a new device that would serve not only as a radio and a calculator, but also as a camera, video camera, music player, and telephone. The sheer weight and size of all the devices and media that an iPhone or Android smartphone replaces would easily top a hundred pounds, and take up an entire living-room wall. At an average of 140 g (about 5 ounces) apiece, the 400 LPs that would be needed to equal a 32 GB iPhone loaded with music stack up to roughly 123 pounds, not counting the weight of the sleeves and covers!
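That figure is easy to check with a bit of arithmetic (the 140 g average mass and the 400-album count are the estimates from the paragraph above):

```python
# Back-of-the-envelope check of the LP-vs-iPhone weight comparison.
LP_MASS_G = 140        # average mass of a vinyl LP, in grams (estimate above)
ALBUM_COUNT = 400      # rough number of albums that fit in 32 GB (estimate above)
GRAMS_PER_POUND = 453.592

total_grams = LP_MASS_G * ALBUM_COUNT       # 56,000 g, i.e. 56 kg
total_pounds = total_grams / GRAMS_PER_POUND

print(f"{total_pounds:.1f} lb")             # about 123.5 lb of vinyl
```

Call it 123 pounds and change, before the sleeves and covers are even weighed in.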

The milestones along the way are worth remembering, even as they fade from our sight: the Walkman (1979), the Discman (1984), the first smartphone (IBM's Simon, in 1992), and the first iPod (October 2001, scarcely fifteen years ago, if that's possible). One could very well ask, what could possibly be next? Or will the hand-held be supplanted by the strapped-on-the-head, the wearable, or even the implanted?

Friday, March 17, 2017

Make Love Not Warcraft

Sometimes, there is a narrative which so perfectly casts into relief the connections and differences between various old and new media forms that it becomes a kind of date/time stamp for the history of media. Orson Welles' Mercury Theatre production of War of the Worlds, Gary Ross's 1998 film Pleasantville, or the recently viral video of a baby trying to use pinch and swoosh finger controls on books and magazines, are among those that come to mind. And to that very select list, one should really add episode 147 of South Park, Make Love Not Warcraft.

The episode's first stroke of genius was to collaborate with Blizzard Entertainment, which custom-produced the computer game segments, even adding features -- such as synchronizing mouth movements with speech -- not actually available in the game. The second stroke was pairing the South Park character voices with their Warcraft avatars (as the game would look and sound to those using voice-chat software such as Roger Wilco or TeamSpeak), so that the kids' voices come out of their tough-looking, overbuilt Warcraft selves. Thus there is irony in every scene, the more so when a balding, beer-bellied, potato-chip-munching man wearing glasses and a carpal-tunnel brace turns out to be the big bad fellow who is "killing everyone in the game." Blizzard executives are shown being stunned to discover that this character has become so strong he's even killing their admins -- he has been playing Warcraft all day every day since it came out, such that he must have no life whatsoever outside of the game. So, as one Blizzard exec asks in Master Po fashion, "How do you kill that which has no life?" The kids will show us the way.

Friday, March 3, 2017

Social Media II

The range and size of social media networks has increased almost exponentially in the early years of the twenty-first century. We've gone from early forums in which only a few hundred people might participate, such as a BBS or a LISTSERV list, to truly mass media such as Facebook and Twitter, which have billions of users around the globe.

But much more than just size has changed. At a certain 'tipping point,' social media begin to function in ways that, when they were smaller, would have been impossible. Facebook and Twitter have been credited with playing roles in the "Arab Spring" in the Middle East, particularly in Egypt and Tunisia; Facebook's founder has been the subject of a major Hollywood film; and Twitter feeds and cell-phone photos have brought down politicians of every party, sometimes within a matter of mere hours. It certainly sounds as though these technologies have crossed some threshold, altering the fabric of reality itself -- but then, of course, one can look back at similar claims made about virtual-reality video helmets (anyone remember Lawnmower Man?) and wonder whether these revolutions will seem so revolutionary a few years from now.

Three key developments have shaped this period: 1) Social media with "presence" -- a main page at which users can add or copy content, offering images, texts, or video of their own making or choosing; 2) Sites with instant linkability -- the ability of users to add (or subtract) active and immediate connections to other users; and 3) Sites that bundle essential tools (e-mail, instant messaging, and other software capabilities). Finally, all of the above, or at least the survivors in this highly competitive field, have gone multi-platform; no social medium of the future will thrive unless it is available on desktops, laptops, tablets, and smartphones, and has some system of synchronizing all its users' preferences and updates.

So what next? The spaghetti is still being hurled at the (virtual) refrigerator wall. Blippy, a site that enabled shoppers to instantly "share" posts about their purchases, was hacked, and credit cards compromised -- so much for that! Google tried to launch its own "Wikipedia killer," dubbed Knol, but the site filled up with spam so quickly that it became almost useless, and Google discontinued it; the company also failed to generate "Buzz," a social networking feature that irritated users with its auto-generated list of "contacts." And Apple stumbled with Ping!, an addition to its popular iTunes platform meant to enable people to share news about music purchases and performances. The latest entry, Pinterest, allows users to "pin" content for one another, with a focus on bargain shopping, and has the unusual distinction that a majority of its users, in many surveys, are women. But will it go the way of the Lifetime network? And what of sites that advertise themselves as 'Pinterest for men'?

It may seem we've already "shared" too much in this era of TMI, and that these social media may be reaching their limits -- but I wouldn't bet on it.