In the second week of December a wikileaked US diplomatic cable from February 2010 revealed the US ambassador’s scepticism at the motivations behind the Romani Law (Decreto Romani), nominally the Italian implementation of EU Directive 2007/65 on Audiovisual Media Services.
The cable described at some length how the law’s provisions could be exploited to the benefit of Berlusconi’s media empire. Amongst other matters, the decree promised greater action on copyright, an area in which the Italian government had hitherto been somewhat uninterested. In fact the design of the Romani Law was driven largely by the need to restrict the commercial activities of Sky, the only effective private sector competitor to Mediaset.
From this perspective the legislation is in historical continuity with its predecessor, the Gasparri law, whose purpose was to ensure an undisturbed transition of media power in the shift from the analogue to digital framework. Yesterday’s incumbents – Berlusconi and RAI – would also be tomorrow’s. The Gasparri law was ultimately the target of a complaint procedure by the European Commission begun in 2006.
Just a couple of days after the leak, on December 17th, the Italian communications authority, Agcom (Autorità per le garanzie nelle comunicazioni), under the powers assigned to it by the Romani Law, announced new measures to be used against sites hosting materials that infringe copyright.
What is Agcom?
Agcom was established by the Maccanico law in 1997 as an agency somewhat independent of the government; of its eight members four are elected by the Chamber of Deputies and the other four by the Senate. The authority is charged with overseeing infrastructure and competition in the communications sector, and even-handedness in broadcasting. Currently it is under pressure from Minister Paolo Romani to punish a programme, Anno Zero, presented by Berlusconi critic Michele Santoro, on the grounds of broadcasting “claims of a gratuitous character, derogatory and seriously damaging to the dignity and decorum of eminent political personalities” on several occasions in January, i.e. allegations against Berlusconi in relation to soliciting child prostitutes, aka the Ruby case…
Marketing Enforcement Strategies
Following Agcom’s announcement of the new measures, the ‘anti-piracy’ organization FAPAV (Federazione Anti-Pirateria Audiovisiva) held an event in Rome in mid-January to present the Italian aspects of a study on the detrimental effect of copyright infringement on employment in Europe, produced by Tera Consultants under commission from BASCAP and the International Chamber of Commerce.
As usual improbably large figures were thrown around (billions of euros and 22,000 jobs lost!) with no reference made to the provision of the underlying ‘raw data’ by the IFPI (the music industry lobby), FIMI (its Italian satellite) and a marketing company, IPSOS. There was no discussion of methodology either, perhaps advisedly so, as the Social Science Research Council (who are conducting similar investigations) had publicly criticised it when the report was initially published in March 2010. Not that any of the journalists reporting the event seemed to care: as usual they faithfully reproduced what they were told.
FAPAV had invited Nicolas Saydoux, head of the French trade group and anti-piracy lobby ALPA, to entertain the audience with a fairytale: how a strategy combining three-strikes legislation and an increased range of legal products on the market had succeeded in reducing piracy levels by 85% – in less than six months!
Obviously FAPAV would like to see similar measures taken against users in Italy, but for now they will have to make do with Agcom’s proposals, namely a system whereby copyright owners can complain to sites hosting their materials, or linking to other sites which do, and request the material’s removal. Where no action is taken within 48 hours, the complaint is passed to Agcom, who, after examination of the offending material, will demand its removal. In the absence of compliance, fines can be imposed.
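The escalation logic of the proposal can be sketched as a small state machine. This is a minimal sketch: the state names are mine, not Agcom’s, and the 48-hour deadline is the only figure taken from the proposal itself.

```python
from datetime import timedelta

# Hypothetical model of the proposed escalation; only the 48-hour
# deadline comes from the Agcom draft, everything else is illustrative.
DEADLINE = timedelta(hours=48)

def next_step(state, elapsed, complied):
    """Advance one step: host complaint -> Agcom -> removal order -> fine."""
    if state == "complaint_to_host":
        if complied:
            return "resolved"
        return "escalated_to_agcom" if elapsed >= DEADLINE else "complaint_to_host"
    if state == "escalated_to_agcom":
        # Agcom examines the material and demands its removal
        return "resolved" if complied else "removal_order"
    if state == "removal_order":
        # non-compliance with Agcom's demand exposes the site to fines
        return "resolved" if complied else "fine"
    return state

# A host that ignores the complaint for more than 48 hours ends up before Agcom:
assert next_step("complaint_to_host", timedelta(hours=50), False) == "escalated_to_agcom"
```

What the sketch makes plain is that no state in the sequence involves a court: every transition is administrative.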
To deal with sites based outside of Italy, it is proposed that, having checked that infringing content is available, Agcom could order providers to block the site’s IP address or DNS entry so as to prevent access. Such an approach is already in use against foreign gambling sites, and notoriously also in place against The Pirate Bay – not that this has stopped many Italians from circumventing these controls on access to TPB.
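Why circumvention is so easy can be shown in a few lines. The sketch below uses entirely hypothetical hostnames and addresses; the point is that a DNS-level block lives only in the ISP’s own resolver, so a user who points their machine at any other resolver gets an answer as before.

```python
# Sketch: the block exists only in the ISP's resolver, not in the
# network itself. All names and addresses are illustrative placeholders.
ISP_RESOLVER = {"blockedsite.example": None,          # resolver withholds the record
                "ordinary.example": "203.0.113.10"}
OPEN_RESOLVER = {"blockedsite.example": "198.51.100.7",
                 "ordinary.example": "203.0.113.10"}

def resolve(hostname, resolver):
    """Return the IP address a given resolver answers with, or None."""
    return resolver.get(hostname)

# The ISP's resolver refuses to answer...
assert resolve("blockedsite.example", ISP_RESOLVER) is None
# ...but switching to any other resolver restores access unchanged.
assert resolve("blockedsite.example", OPEN_RESOLVER) == "198.51.100.7"
```

IP-level blocking is harder to sidestep than this, but proxies and VPNs serve the same purpose there.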
What is really interesting about all this is that Agcom’s powers would not require any judicial order. There is no judge involved. Attentive readers will be struck by the similarity to the first version of Hadopi in France. Undoubtedly the positive feelings of FAPAV towards this scheme are driven by the same rationale that was behind Hadopi 1: accelerate the process of shutting down the alleged infringer by recourse to administrative rather than judicial mechanisms. Or to put it more simply, eliminate due process.
Amazingly for such a controversial system, it is not being created by parliament, but rather through an administrative order on the part of Agcom, under the terms set out by the Romani decree. The proposed order was released in December and is subject to two months’ ‘public consultation’ prior to being enacted. A campaign has been started by an alliance of organizations including consumer groups, lawyers, and businesses. In recent days they have launched a site to coordinate opposition to the measures.
In a separate decision Agcom has also decided that sites with a turnover of more than 100,000 euros per year based on user-generated content will be subjected to the same legislative requirements as TV stations – restrictions on the provision of content to minors, obligations to individuals defamed etc – and are to be treated as having responsibility for the content on their sites.
Most heavily impacted by this is YouTube. In 2008 Mediaset initiated a case against YouTube/Google, demanding 500 million euros in damages for copyright infringement of Mediaset programs on their video platforms. This resulted in two decisions against Google, in December 2009 and February 2010, regarding liability for hosting parts of the Italian version of Big Brother (Grande Fratello), a franchise owned in Italy by R.T.I.
Agcom’s decision regarding liability for user-generated content may be of significance in determining the eventual outcome, but this will also hinge on clarification of the more general liability of intermediaries in Italian law, currently a source of great confusion.
“Obsession with present-mindedness precludes speculation in terms of duration and time…. That process is due to causes which affect the mental temper as a whole, and pour round us an atmosphere that enervates our judgment from end to end, not more in politics than in morality, and not more in morality than in philosophy, in art, and in religion.”
Harold Innis, The Bias of Communication, 1954
Elizabeth Eisenstein, in her extraordinary work on the social impact of the printing press (1), emphasizes how the full ramifications of its invention did not reveal themselves immediately, but instead set in train a series of changes that were apparent only over the longue durée. She remarks that for the hundred years after Gutenberg’s Psalter came off the press, a contemporary would not have noticed any great change. Print and scribal culture co-existed initially, and the former had to await refinement of printing techniques, changes in paper manufacture and other developments before ousting the hand-copied word. And triumphing in the battle for supremacy over the form of the word was just the beginning: the changes ushered in by print required the slow development of social practices emanating from divergent sources: theological dissent, the networking of dispersed enthusiasts of the arts and sciences, the self-interested drive of map-makers and commercial printers. Eisenstein argues that these forces combined to create the material conditions which were a precondition for the Reformation and the emergence of modern science. If such dramatic consequences were not anticipated by Gutenberg and Fust’s contemporaries in the fifteenth century, it is understandable; after all this was the first communications revolution. Those witnessing the events of the (public) internet’s first twenty-five years don’t have that excuse.
(1) Elizabeth Eisenstein, The Printing Press as an Agent of Change, 1979, Cambridge University Press
The RIAA and its various international affiliates have been prosecuting people since September 2003, to precious little effect, even if more than 15,000 people have been targeted so far. Simultaneously they have sought to wrest back the trade in media files by licensing their archives to iTunes et al. and running publicity campaigns to ‘correct’ the hoi polloi’s acquired taste for the free copies available online. Joined by the MPAA and games manufacturers, the recording industry has another, somewhat more capricious, ally in the form of at least a sub-section of the Internet Service Provider industry, many of whom have initiated unilateral policies against all their users simply so as to make their own lives easier, and more profitable. The key term is traffic shaping, or as users experience it, throttling, and the result is degraded performance of targeted protocols. Have you checked your ISP lately?
Behind this practice lie a variety of intentions. The first is the desire to restrict the amount of data traffic: as is well known, p2p is responsible for probably a majority of network traffic; some estimates claim that BitTorrent (BT) alone is responsible for 50% of all data exchanged. Furthermore, (well-behaved) p2p users upload large amounts of data relative to other users, which is costly for service providers, whose model is essentially built on a client-server rather than a peer-based architecture, and who price subscriptions on the presumption that download will exceed upload by a factor of four. Where the ISP doesn’t have adequate bandwidth, this can cause quality of service (QoS) problems for other applications.
Traffic shaping is implemented using three techniques. The first is the restriction of data transfer speeds over ports commonly used for p2p applications, such as 6881-6889 for BT. This can be easily circumvented by changing the port settings in the client’s preferences and updating the virtual server on the router. Increasingly, however, more sophisticated methods are being used which involve packet inspection. This doesn’t require storing or examining the payload to identify the content, but merely looking at the header and making an ad hoc decision as to how to treat it. If packets I’m sending to peers are being dumped or lost, this will affect how I’m treated by other peers in the network, reducing my download speed. This technique can be thwarted in some cases by encrypting the data (as is possible in clients such as Azureus and uTorrent). Lastly, where encryption and port randomization are used to evade identification, the solution is simply traffic analysis and heuristics: if I have dozens of TCP connections open to numerous IP addresses over a sustained period, well, it’s not impossible that I’m getting blocks of the same file from them. Products capable of executing such functions are much in demand today; check out Cisco’s Network Based Application Recognition (NBAR) or Allot’s NetEnforcer.
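The third technique can be illustrated with a toy version of the heuristic: a subscriber holding connections to many distinct remote hosts within a short window looks like a swarm participant. The threshold and window below are illustrative, not taken from any real product.

```python
from collections import defaultdict

# Illustrative thresholds; real appliances tune these per network.
MAX_PEERS = 30   # distinct remote addresses tolerated per window
WINDOW = 60      # seconds

def flag_probable_p2p(flows, max_peers=MAX_PEERS, window=WINDOW):
    """flows: iterable of (timestamp, subscriber, remote) tuples.
    Returns the set of subscribers that exceed max_peers distinct
    remotes inside any window-second interval."""
    by_subscriber = defaultdict(list)
    for ts, sub, remote in flows:
        by_subscriber[sub].append((ts, remote))
    flagged = set()
    for sub, conns in by_subscriber.items():
        conns.sort()
        for t0, _ in conns:
            peers = {r for t, r in conns if t0 <= t < t0 + window}
            if len(peers) > max_peers:
                flagged.add(sub)
                break
    return flagged

# One subscriber talks to 40 peers within a minute, another just browses:
flows = [(i, "10.0.0.1", f"peer{i}") for i in range(40)]
flows += [(0, "10.0.0.2", "webserver"), (30, "10.0.0.2", "mailserver")]
assert flag_probable_p2p(flows) == {"10.0.0.1"}
```

Note that nothing here reads a single byte of payload, which is why encryption offers no protection against this class of detection.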
As mentioned above, there are many instances where these techniques are driven by cost and QoS considerations. However this is not always the case. For example Clearwire – a WiMax operator in Canada, Ireland and elsewhere – restricts the use of Voice over IP services because it is marketing its own (it also makes p2p services more or less unusable). 3G operators exclude VoIP for the same reason (they’re not interested in cannibalizing their mobile phone business model). In the future we’re going to see a lot more of this, particularly as ISPs try to position themselves as the delivery point for services such as video on demand – at that point their interest in blocking the untolled distribution of movies will be transparent.
The irony in all this is that ISPs with their own infrastructure effectively received a wealth transfer from the media industries from Napster onwards. Access to free content was a key driver behind the uptake of broadband subscriptions – the point has been made ad nauseam – but 2-8Mb connections are completely otiose for those whose ambition doesn’t exceed browsing the web and using mail. Indeed the media industries continually rammed the point down the throats of anyone who would listen: fierce legislative battles were fought to determine ISP liability for the data carried, concluding in the notice and takedown provisions of the DMCA and EUCD, which grant service providers legal shelter from their users’ actions. This certainly facilitated the development of file-sharing culture to the point whereby, irrespective of what happens prospectively, the media industry has a massive legacy problem due to the sheer quantity of material available out there, for free. So the ISPs are screwing people at both ends: users and media corporations.
Secondly, all of this occurs without ISP users being given any explanation; there is no transparency. Traffic shaping systems are introduced on the sly and performance degrades from one day to the next, causing confusion and a massive waste of time on the part of users who are convinced that they have done something wrong. There are unusually honest companies, such as the Australian Exetel, but they are the exceptions. Under cover of creative interpretations of the Terms and Conditions in subscribers’ contracts they can legally get away with changing the service delivered, but it speaks volumes about the contempt they reserve for their predominantly non-technical user-base.
Lastly, my motivation for this rant is that six years ago, together with many others, I realized the potential of distributed media platforms built on p2p architectures. The ambition was not really to get free Hollywood and boy-band bumf, but rather to promote and distribute independent material produced according to another economic logic. And empirically it is beyond question that the amount of material authorised for exchange is proliferating rapidly (Creative Commons, GPL, the Free Culture Definition etc). Perhaps it sounds like net-utopianism, vintage 1995, but this is the first time in communications history that there exists an infrastructure of immediately global scope, capable of being assembled in a modular grassroots manner by the users themselves, who can decide what they want to help push. Irrespective of one’s side in the various debates around media concentration, this is a good thing. In the absence of such shared scalable infrastructures the impact of ‘disruptive technologies’ on communications freedom is reduced to the YouTube fallacy: you can feel free to upload your content, but only they have the deep pockets to finance reliable streaming servers that can satisfy demand, thus they ultimately control the conduit. Over the next period there will be efficient p2p-based streaming solutions, but if people have become accustomed to the return to dreary client-server relations epitomised by web 2.0/iTunes/Rapidshare, the benefits will have been squandered. For some time now there have been legislative initiatives in different jurisdictions to guarantee Net Neutrality; as yet none have managed to become law.
Two months ago I read a text by some friends from Ippolita in Italy titled “The Dark Side of Google”. Therein they construct a dystopian picture of the emerging power of the ubiquitous search engine, but I finished reading with the sense that the picture painted lacked perspective. The bone I had to pick with them was that focussing almost exclusively on the privacy implications of data mining obscured the role played by the company in changing the calculus on the field of copyright conflicts.
The Old Gatekeepers: the Mass Media Model
In the age of mass media the obstacle to reaching a wide audience was straightforward. To broadcast one’s message required high-cost machinery and expensive licenses. In the former category we have the daily newspapers with their huge printing presses and complicated logistical infrastructure, terrestrial television with its broadcast towers, radio stations with powerful transmitters, satellite operators, and cable providers with their expensive plumbing through the ground to every house. If you wanted to get your message out, you had to make a deal with them and satisfy their regulatory departments as to your legal bona fides. Access to their infrastructure of course hinged on compatibility with their business plan, namely the accumulation of more subscribers or viewers for their advertisements. In addition they assumed a conservative position with regard to borderline content, refusing to broadcast material that entailed any legal risk. Over time this latter factor congealed so as to make them wary of any programme using materials appropriated under fair use, for example, and their risk-averse approach is arguably one of the reasons for fair use’s marginalization in televisual culture.
The Changing Political Economy of the Information Wars
Previously the opponents of media companies amounted to a few small companies trying to innovate and users trying to get music and video, often for free, but certainly untrammeled by digital rights management. This didn’t make for a very fair battle, and although the hardware industry occasionally weighed in to limit the media industry’s influence – and ensure that its own business opportunities were not disregarded – they went as far as their own interests reached and no further; there was, and is, no identity between user interests and the hardware industry in general.
The axis of conflict has been well established for many years now, a fight between media companies based on selling discrete parcels of data as cultural commodities (CDs, DVDs, databases) and those building business models based on leveraging access to monetize at other points in the life of information.
At its simplest level this is exemplified by the telecoms business: there can be no doubt that the take-up of broadband was fueled by the lure of copyrighted content available for free, if one had a data transfer capacity that made download times tolerable. Telecoms could leverage their position as common carriers to abdicate responsibility for the bits passing over their wires, but were in fact capitalizing on the content industry’s inability to protect ‘their’ property.
Selling increased usability
As the number of broadband subscribers reached critical mass other opportunities arose, one example being mymp3.com, offering services that allowed people greater control over use of media that they owned. It required owners of CDs to insert the original disk whilst connected to mymp3’s database; they could thereafter access the files from wherever they were connected to the net. It wasn’t based on copyright infringement, but it transgressed the line of control which media companies have endeavored to enforce as they lose their grip on the path of music through the net. The result of course was that it was shut down, even though it was adding value to the media commodity. Mymp3.com was easy to kill because it was a small company without significant commercial power or the deep pockets to work the litigation game.
In recent years advertising has emerged as the main means of monetizing people’s online time, from Google to MySpace. Google’s every acquisition is guided by two priorities: (i) domination of the online advertising market (as demonstrated by their purchase of DoubleClick and the millions of partnerships they make with users via AdWords); (ii) developing new content centres that will drive eyeballs towards their virtual real estate (Google Books, Gmail, YouTube). Unlike the smaller companies annihilated by the media combines in the 1998-2003 period, Google is too powerful to be pushed around and impossible to crush. They employ some of the best and most creative legal talent out there and are cherry-picking the most gifted technologists. This is why the calculus has altered. So the question is: what will the new landscape look like, and what will distinguish it from the old?
Data-networks effectively eliminate the obstacle posed by the cost of equipment – a computer is both a production station and a transmitter – although transmission costs for popular programs are expensive; fortunately p2p distribution channels and facilities like archive.org can provide solutions for producers who are not concerned with getting an immediate payback.
The New Gatekeepers
Having solved the problem of broadcast scope, producers find that the problem is now how to find an audience. This question in fact entails two elements: how people decide what they want (preference formation) and the cost of finding what they’re looking for when they already know that they want it (search costs). Industrial media manufacturers solved these questions by investing hugely in marketing and then by either buying up distributors/retailers (vertical integration) or through partnerships. The online world divides the model of preference formation between the traditional mechanisms of marketing and user-based recommendation systems driven by proprietary algorithms. The latter tend to be used by more experienced users; other people find things through scatter-shot use of search engines. Sourcing the data is the preserve of search engines, either on the web as a whole, or on a site where you could reasonably expect to find the product (eBay, Amazon, etc.).
Users’ reliance on search places Google in a position of enormous power, and the search engine has permeated people’s experience of the web to the point where they forget that it’s even there: surveys have shown that people often run searches for Google or Yahoo from the Google search box itself! Much has been made of the fact that the search results are decided democratically, on the basis of the pattern of links clustered around search phrases, but in fact we can only take Google’s word for it. Should Google decide that a site is inappropriate, it simply disappears.
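The ‘democratic’ mechanism referred to is, in its textbook form, PageRank: a page’s weight is built from the weights of the pages linking to it. A minimal sketch, using plain power iteration (the 0.85 damping value comes from the original PageRank paper; the graph and names are illustrative):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: {page: [pages it links to]}. Plain power iteration."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # a page passes its weight, divided equally, to its targets
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
            else:
                # dangling page: spread its weight evenly over everything
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Two pages link to 'hub', so 'hub' ends up with the highest rank:
ranks = pagerank({"a": ["hub"], "b": ["hub"], "hub": ["a"]})
assert ranks["hub"] == max(ranks.values())
```

The point of the sketch is what it leaves out: whatever adjustments, demotions and removals are layered on top of this core are exactly the part we can only take Google’s word for.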
Wine research has been taking up most of my time of late, impeding me from commenting on recent developments in video platforms which demonstrate the deepening of some already pronounced tendencies. Having launched a war against software developers in 1999 (with the litigation against Napster), and begun prosecuting individuals downloading music in September 2003 (although there had been isolated and dissimilar cases prior to that), the media wars have now gone internecine, as evidenced by Viacom’s one billion dollar suit against Google.
The latter are no strangers to litigation from copyright owners (see the various fights around Google News, their library digitization project etc.) and will hardly be intimidated by their new opponents. Google, of course, is fundamentally an advertising company that has built its success by continually expanding the information available through its services and building the value of its ads through information-collection about its user-base. “Information wants to be free” has found its materialization in Google, in a finessed version of what was the mechanism for free over-the-air terrestrial television.
The acquisition of YouTube will eventually be seen as the bargain that it was – given the brand value accrued through being the first on the market to make an impact – and even these one billion dollar lawsuits are small beer in the context of a battle to dominate and monetise the infoscape. Meanwhile the diabolical Fox Group have announced a partnership with NBC (with Sony, Viacom, CBS and Time Warner also rumoured to be interested) to launch a rival to YouTube which will be a mix of ad-supported and pay-per-view material. Yet these mooted collaborators have different agendas amongst themselves, with AOL-Time Warner attached to an internet and cable strategy, whilst Sony tries to re-centre itself on electronic devices (particularly the mobile phone). My suspicion is that these companies, having systematically screwed up their strategy since the 1990s, are on a loser again. Ask Steve Jobs, after all, to whose iTunes they handed over a large chunk of the music industry free, gratis and for nothing.
And is it not ominous for such a project that at the very moment the plan is announced, EMI reveal that they are about to make available DRM-free music via iTunes and other retailers? This represents nothing less than a capitulation to the new social mode of treating media objects by a generation of users anthropologically distinct from their predecessors, who took paying for media commodities as normal. Today, if you don’t want to pay, or regard DRM as an inconvenience, the alternative solution to your needs is just a couple of clicks away, be it called Rapidshare, BitTorrent, eMule or Limewire. Google’s ace in the hole is that free is the best means to acquire attention and thus bring down the cost-per-contact calculation determinant in the advertising industry.
If you agree with this analysis then the question is what will be the ramifications of such a conflict between Google and the remains of the media industry for users. My view is that there is a convergence of interests with Google, but that their information-collection practices must be constrained. This data-mining is generating a lot of dystopian visions right now (see ‘Google’s Masterplan‘ and ‘EPIC 2014‘), but they tend to overlook the wider context of intra-industry conflict Google is operating in, a point which is not irrelevant in terms of our interests as users.
A major obstacle to filmmakers wishing to use footage under fair use is risk aversion on the part of their contractual partners. Television stations and insurers typically take a conservative position with regard to potential disputes and require changes to the film rather than risk litigation. As a consequence, uses that are fully within the law have been unnecessarily cut, creating a vicious circle: aggressive rightsholders and a refusal to fight bogus claims push back the bar for fair use, and uses which should be free become subject to insane licensing fees.
In November 2005 Pat Aufderheide, Peter Jaszi and the Center for Social Media at American University began working with documentary filmmakers to develop a set of best practices on fair use. These have had a special value due to being the product of peer consultation in the film community and have begun to influence commercial practice elsewhere in the product cycle, including broadcasters such as the Independent Television Service and Discovery.
Now it has been announced that an errors and omissions insurance company, National Union, will offer a policy that will include fair use coverage providing a copyright lawyer attests to their belief that the use stands within the terms of the exception. This should make it a lot more difficult to intimidate filmmakers into self-censorship and enable the taking of test cases that can invert the current protectionist tendency.
Similar best-practice processes are underway in other sectors, such as music education, in collaboration with writer and activist Kembrew McLeod.
The next days will see the launch of a remixing competition in the UK titled Mix’n’Mash. Participants have to produce a three-minute piece which will be published under a Creative Commons Non-Commercial Share Alike licence. I’ve been researching guidelines for the contest and have been looking at some of the recent works boldly employing fair dealing/fair use so as to assess where the line now falls. Two cases particularly caught my attention.
The first is The Pervert’s Guide to Cinema by Sophie Fiennes and Slavoj Zizek, which despite its suggestive title is really a psychoanalytical history of film. Here we have the pleasure of watching Zizek meditating while he sits on the toilet or placed in the scene of a crucial moment in movie history. With considerable zest he outlines the conflicts played out between id, ego and super-ego, and illustrates the argument profusely with clips taken from about forty movies. The segments used are mostly between 15 and 30 seconds, and each one is accompanied by the names of the director, copyright owner etc. In any case the film has had a lot of difficulty getting distribution in the United States because the clips used weren’t cleared with the rightsholders, and rely upon the fair dealing exception for criticism and review. Fortunately this didn’t deter Channel Four in the UK, who have experience litigating this subject (a 1994 case about a documentary on Kubrick’s A Clockwork Orange) and broadcast the film.
The second case regards Kirby Dick’s latest offering, This Film Is Not Yet Rated, which is a critique of the MPAA’s rating process, so vital in enabling access to mass audiences in US cinemas. The identities of the ratings committee are secret and there is no transparency to the process, so Dick hired private investigators to follow them and build up a picture of their attitudes. As the director of work that frequently deals with the politics of the body, he was particularly interested in the marginalization of films depicting or celebrating non-heterosexual pleasure, and really the film is overtly an attack on censorship. Audaciously, Dick decided not to seek clearance for the many clips he used, so the film’s form becomes a challenge to that other form of private censorship: the arbitrary power of the copyright owner. Amusingly he then submitted the work to the Ratings Board, who gave it an NC-17 stamp, but the process requires that a director make no changes after assessment, and he did – thus the name of the film. Unsurprisingly then, the movie has not had a theatrical release in the US but has circulated widely in film clubs, festivals etc. Next week it will be released on DVD. There’s an interesting article on the subject here, including discussion of several other titles.
Another directive, COD/2005/0260, is currently snaking its way through the EU legislative process. As always, the Commission is using it as an opportunity to showcase snappy phraseology, as demonstrated by the title: “Television broadcasting services: simplify the regulatory framework for broadcasting or linear services, introduce minimum rules for non-linear audiovisual media services (amend. Direct. 89/552/EEC, Reg. 2006/2004/EC)”. You’ll need the help of a jargon-killer, or perhaps the wiki page created by the Open Rights Group (UK).
Television Without Frontiers (TWF) was passed in 1989 and set out rules governing television broadcasts in the single market. Matters covered included:
- mandatory quotas for European-produced content
- mandatory quota of 10% transmission time/budget for independent producers
- limitations on time allotted to advertising (15% of the day, 20% maximum in any hour)
- protection for children
- restrictions on alcohol advertisements.
At the heart of the directive lies the country of origin principle. This meant that a broadcaster needed to comply with the law as set out only in its own jurisdiction – it did not have to tailor its products for the various national regimes. How to determine the location of a station’s ‘origin’ was clarified in a 1997 amendment stipulating that for these purposes it would be the country where the channel had its main office. That amendment also enabled national legislators to require the free over-the-air broadcasting of cultural events considered to be of vital national importance – thanks to this clause we get access to important sporting events which would otherwise be monopolised by the pay-per-view merchants.
The current proposal will relax controls on non-linear on-demand style services, and introduce new elements as well as amendments to the existing regulation:
- product placement will be brought under an EU wide regime
- the rules on advertising interruptions will be relaxed
- extension of country of origin principle to non-linear services
- end of focus on tv transmission; the effects of the directive will now be extended to the delivery of media over devices including mobile phones and the net.
The current amendment process kicked off in 2005. The novelty in the latest version is the attempt to cover all audio-visual services and abandon the technology specific language that treated the boob-tube as the means by which people would get their eye-candy forever. Having gone tech-neutral, the key distinction determining the type and extent of regulation is whether you are like a traditional broadcaster – pushing fixed-schedule content at mass audiences – or non-linear where the user can pull down material she selects.
- In October the blogosphere erupted momentarily over the directive based on the concern that the new law would be applied to the non-commercial and user-generated sphere of content. That ambiguity has apparently now been eliminated and the directive will only apply to clearly commercial operations.
- In Britain everyone seemed to be against the directive, including Ofcom, former minister James Purnell MP and the Digital Content Forum. The general thrust of these objections is that the directive is premature given that the market in mobile and newly delivered content is in its infancy, and that deregulation would be preferable.
- The European Consumers’ Organisation, BEUC, is (a) opposing the parts of the directive relaxing restrictions on advertising, (b) fighting the sanctioning of product placement and (c) lobbying to ban the marketing of foods high in fat, sugar and salt to kids.
The Commission’s original proposal is here; another version annotated with the Parliament’s amendments can be seen here, whereas you can track the legislative process at the Observatory. The Audiovisual and Media Policies unit at the Commission has a page providing something of an overview of the process so far. Basically the Parliament approved it on a first reading; now it will return to the Council before heading back to the Parliament for a second reading some time this year.
An interesting article on an Italian IT news site, Zeus, reveals that the national broadcaster, RAI, has proposed to make all programs produced using public funds available online for download under a Creative Commons license. The proposal forms part of the new public service contract stipulating the organization’s obligations in exchange for receipt of the television license fee, currently set at 104 euros per year. This will also require RAI to acquire the rights necessary to allow transmission via internet, and rights restrictions will mean that the programming is available only to users accessing the website from inside the country. In this respect, RAI is following the lead provided by the BBC, demonstrating the latter’s ability to serve as an important example to other broadcasters. The public service contract will have to be reviewed and approved by a committee composed of members of the Italian Senate and Parliament before being made official.
Meanwhile the BBC has completed the pilot period of its Creative Archive and now awaits charter approval before making materials available. In October the German broadcaster ZDF announced its intention to emulate the Creative Archive (in German), and it hopes that this will lead to the wide circulation of its materials on video platforms and eventually win it new users, particularly amongst the young people most attracted to these sites.