Intellectual Property – DocBug (https://www.docbug.com/blog): Intelligence, media technologies, intellectual property, and the occasional politics

Laundering Copyright At Scale https://www.docbug.com/blog/archives/1094 Mon, 25 Sep 2023 20:24:16 +0000 https://www.docbug.com/blog/?p=1094 The Authors Guild just added its own class-action lawsuit against OpenAI, claiming that using its members’ copyrighted works to train ChatGPT violated their respective copyrights. This is essentially the same argument made in two other lawsuits filed a few months ago and in the class-action lawsuit filed by artists against Stability AI. As I said about the Stable Diffusion case, case law suggests that training an AI is fair use, though that’s far from settled. Either way, I’m sure the big players are busy training “clean” models using only public domain and licensed content (particularly content they already “own”), both as a hedge and because uncertainty about fair use will naturally tamp down any competitors who don’t have the resources to make their own clean versions. There’s already word that Getty Images is partnering with Nvidia to create its own generative AI system trained only on its own library, and I’m sure they aren’t the only ones.

But don’t expect clean training data to make artists and authors any happier, because this whole debate isn’t really about how these models were trained — it’s about what they can do. Copyright law protects a fixed expression of an idea — the words on a page, the placement of ink in a drawing, even the composition of a photograph — but not the idea itself. That’s by design, because art inherently builds on what came before it, “stealing” the best ideas and remixing them into something new. If copyright were extended too broadly we might never have seen another detective story after Edgar Allan Poe’s The Murders in the Rue Morgue, or another Pointillist painting after Georges Seurat’s A Sunday Afternoon on the Island of La Grande Jatte.

In general artists are comfortable with this kind of “stealing” so long as it pushes the art in new directions. As T.S. Eliot said, “Immature poets imitate; mature poets steal; bad poets deface what they take, and good poets make it into something better, or at least something different.” Austin Kleon, author of Steal Like An Artist, put it more succinctly: “Imitation is not flattery. Transformation is flattery.” Copyright law tries to capture this distinction between good stealing and, well, just plain stealing by requiring “substantial similarity” to a copied work for there to be infringement, and by carving out fair-use exceptions for reasonable sampling in transformative works. Factual works have to be especially similar to be infringing, to the point where it’s perfectly legal (and, as long as credit is given, perfectly acceptable) for newspapers to rewrite their competitors’ reporting in their own voice. The similarity threshold for what counts as infringement in art and fiction isn’t quite as high, but it’s still legal to copy an artist’s style and general form as long as there are enough differences overall.

The hope, presumably, is that this similarity threshold is a way to allow good copying and outlaw bad copying without forcing judges to decide on the artistic merit of the changes that were made. But what about works that don’t really add anything useful to a prior work but still tweak it just enough to avoid copyright infringement? Take much of what comes out of content farms like Demand Media (eHow.com, answers.com), which is essentially regurgitated content from blogs, Reddit and Wikipedia with just enough rewriting to pass copyright muster, or at least to pass the filters that Google uses to deprioritize such low-value-added content in search results.

In theory content mills could add value above and beyond the original, but the business model only prioritizes quantity and high search-engine rank (preferably higher than whoever you copied from). In the early 2010s these sites relied mostly on severely underpaid contractors to churn out blog posts for pennies per word, but nowadays more and more of this work is being handed over completely to generative AI. Take, for example, Content At Scale, who advertise a service that uses generative AI to write a search-engine-optimized blog post or article based on nothing more than the set of keywords you want to rank for in web searches. Or they can write articles based on your competitor’s content: “Have a competitor that’s crushing it with their content marketing? Or have awesome thought leaders or content sites in your niche? … Take any existing article, and have a freshly written article created that uses the source URL as context for the all new article.” They can also go straight from a podcast or YouTube video to a blog post, and, just in case you missed what this is really about, they advertise that one of their advantages over existing content-mill services (besides price) is that they automatically run scans to make sure their posts aren’t flagged for plagiarism or AI-written content.

Rewriting someone else’s material either to avoid copyright infringement or to avoid its detection is being called copyright laundering, by analogy to money laundering. But unlike money laundering, it’s perfectly legal as long as you change enough that the result no longer meets the substantial-similarity threshold. And it’s not just news articles and blog posts being generated anymore: just last week Amazon announced that it was limiting the number of books an author can self-publish on Kindle to three per day because of AI-generated content.

No wonder authors are pissed!

A few more points on that lawsuit against Stable Diffusion https://www.docbug.com/blog/archives/1043 Sat, 04 Feb 2023 22:24:44 +0000 https://www.docbug.com/blog/?p=1043 One of the big claims in the class-action lawsuit against Stability AI is that Stable Diffusion in some way contains all of its training data, and is therefore a derivative work in its own right:

Because a trained diffusion model can produce a copy of any of its Training Images—which could number in the billions—the diffusion model can be considered an alternative way of storing a copy of those images. In essence, it’s similar to having a directory on your computer of billions of JPEG image files. But the diffusion model uses statistical and mathematical methods to store these images in an even more efficient and compressed manner.

Case 3:23-cv-00201 Document 1, p. 75(c)

That description really misses how Stable Diffusion works. Every generated image is the product of three things (a minimal code sketch of how they combine follows the list):

  1. Model: In SD’s case the model is a fairly standard neural net architecture that was trained using the LAION-5B training set. The training set includes about 5.85 billion images (about 240 terabytes), but the model itself is only 5.2 gigabytes. Artists will also fine-tune existing models with their own additional training sets.
  2. Artist guidance (text prompt, hyperparameters, workflow, etc.): Any text (and even no text) will produce an image, but the artists who are getting good results from SD often spend hours figuring out complex prompts to get exactly what they want. There are also numerous hyperparameters, more advanced workflows using in-painting, and bootstrapping techniques using img2img.
  3. Randomness: All generated images start with a random field of pixels to bootstrap the process. This randomness is usually specified by a large random number called the seed, and artists will often produce tens or hundreds of random variations to get something that matches their vision.
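
To make these three ingredients concrete, here is a minimal sketch of how they combine in a typical text-to-image run using the Hugging Face diffusers library. This is only an illustration, not the setup at issue in the lawsuit; the model ID, prompt, seed and parameter values are all assumptions chosen for the example.

    # Minimal sketch of generating an image from (model, prompt, seed).
    # Model ID, prompt, seed, and parameters below are illustrative assumptions.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",          # 1. the trained model (a few GB)
        torch_dtype=torch.float16,
    ).to("cuda")

    prompt = "a lighthouse at dusk, oil painting"   # 2. artist guidance
    seed = 1234567890                               # 3. randomness, pinned by a seed

    image = pipe(
        prompt,
        generator=torch.Generator("cuda").manual_seed(seed),
        num_inference_steps=50,
        guidance_scale=7.5,
    ).images[0]
    image.save("lighthouse.png")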

These three elements define the space of all possible generated images, and in theory (and often in practice) if you know the model, prompt and seed used to generate an image you can recreate it exactly. But even though all three elements contribute to a generation, the amount of contribution can vary. Run SD without a text prompt and you get a wide range of almost-images, from uncanny-valley selfies and half-melted cityscapes to architecture that appears to incorporate lawn furniture in its window dressings. In this part of the space it’s almost like wandering through Borges’ fictional Library of Babel, which houses books containing every possible combination of letters, except in this case the output is usually at least somewhat recognizable. In another part of the space are the gorgeous images generated by artists who spent countless hours honing their prompts, hyperparameters, choice of models and workflows. And while I seriously doubt the complaint’s claim that you can generate any arbitrary instance from the 5.85 billion training images, it is true that some training images can be generated reliably with just a simple prompt. This is called memorization in the machine learning field (essentially a form of overfitting), and it’s more likely to happen for images that are duplicated many times in the training set, or when there are only a few images that fit a particular part of the image space. For example, here’s the original Starry Night alongside one of the four images I got from stablediffusionweb.com when I used the prompt “Starry Night”:

[Image: The Starry Night (original)]
[Image: “Starry Night” as generated by Stable Diffusion]

It’s not just public-domain art — the authors of a recent paper (not yet peer reviewed) generated images with SD using prompts taken from the original training set, and were able to produce a significant number of images that showed at least some copying from other training images (interestingly enough, the copied images were often not the ones whose captions were used as the prompt). Some of these images are generic enough that I’m not sure I’d call it “copying” (e.g. a close-up of a tiger’s face, or a clip-art map of the USA), but others clearly duplicate a training image that was probably replicated many times in the training set. For example, the prompt “Captain Marvel Poster” generates images that are all very similar to the poster advertising the 2019 movie, and of course prompting with the name of your favorite comic book superhero will generate images that, if published, would almost certainly constitute copyright infringement. I expect the plaintiffs will focus mainly on this part of the space.
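
For readers curious how this kind of copying gets flagged at scale, here is a rough sketch of a near-duplicate check using a perceptual hash. It is only a simplified stand-in for the paper’s methodology (which uses stronger similarity measures); the file names and the distance threshold are assumptions for illustration.

    # Rough sketch: flag a generated image as possibly memorized if it is
    # perceptually very close to a known training image. File names and the
    # threshold are illustrative assumptions, not the paper's actual pipeline.
    from PIL import Image
    import imagehash

    generated_hash = imagehash.phash(Image.open("generated.png"))
    candidates = {
        "training_candidate.jpg": imagehash.phash(Image.open("training_candidate.jpg")),
    }

    for name, candidate_hash in candidates.items():
        distance = generated_hash - candidate_hash   # Hamming distance between hashes
        if distance <= 8:                            # small distance => near-duplicate
            print(f"possible memorization: {name} (distance {distance})")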

Philosophers can (and lawyers will) argue whether the fact that SD can produce copyrighted works means the model itself somehow contains derivative works. In the meantime, companies are already working on models trained only on “clean” training sets comprising public domain and explicitly licensed works, both as a hedge against whatever the courts decide and because commercial artists are very reluctant to use a tool that might put the copyright status of their own works in doubt. And since “style” isn’t copyrightable, model providers can always hire artists to paint original images “in the style of <foo>” and then train on those images under a generic (or even proprietary) style name of their own.

I’m no lawyer, but it seems like the most straightforward decision would be to treat the AI the same way you would treat a human collaborator. As with a human assistant, you can train on copyrighted works and you can generate new works in the same style as copyrighted works, but if a generated work shows substantial similarity to a protected work that was either known to the tool’s user or present in the training set, then that’s infringement. That way all our existing understanding of copyright and fair use (such as it is) stays in place, and tool-makers have an incentive to put in safeguards against inadvertent copyright infringement, but ultimately the responsibility lies with the artist using the tool.

On the flip side, the plaintiffs’ theory that AI image generators are simply “complex collage tools” that produce derivative works raises a whole host of follow-up questions. For example, if a model contains both public domain and copyrighted works, what are the criteria for deciding which training images a generated image derives from? If a model itself is a derivative work, does that mean a model trained only on licensed material is itself protected by copyright? (This is probably not currently the case, though as far as I know it hasn’t been tested in court.) If all generated images are derivative works, does that mean the owner of an AI model trained on licensed material has a claim on every image the model produces? And perhaps most importantly, is there any way to “remove” a work from a model without having to retrain from scratch, e.g. when publishing a model in a country with different copyright restrictions than the one where it was trained, or when the ownership of a particular image is challenged?

The copyright battles over AI art start in earnest https://www.docbug.com/blog/archives/1041 Fri, 20 Jan 2023 01:15:10 +0000 https://www.docbug.com/blog/?p=1041

Well, the long-anticipated copyright battles over AI-generated content have finally started. Last week a group of artists announced they are suing Stability AI, Midjourney and DeviantArt for using their artwork (and that of literally millions of other artists) to train their machine learning systems, claiming doing so violates their copyrights. And yesterday Getty Images announced their own lawsuit against Stability AI, arguing that “Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images” without a license. The class-action complaint makes several claims, but the most important ones are:

  1. Training an AI is not fair use: Using an image to train an AI is not fair use, even when the works were freely published on the web and no copies are ever republished. This appears to be Getty’s main argument as well, although in their case I expect they simply want to pressure Stability into paying them for a license to make the case go away.
  2. Stable Diffusion is itself an infringing work: The complaint claims that Stable Diffusion actually contains all 5 billion training images in compressed form, and is thus a derivative work in its own right. In their words, SD and similar systems are each “a collage tool, only capable of producing images that are remixed and reassembled from the copyrighted work of others.”

Case law has at least suggested that training an AI is fair use, but it’s far from settled. As for the second point, there’s no question that Stable Diffusion can generate images that, if published, would infringe on someone’s copyright and/or trademark (e.g. try entering “Batman” into Stable Diffusion’s generator). What the court will have to decide is whether the tool itself is more like a box of colored pencils (capable of creating infringing works but not infringing in their own right), or more like a bunch of superhero stencils you can mix and match.

In the end I’m not sure this will be much more than a speed bump. The technology has been proven well enough for companies to invest the resources to build a “clean” training set from public domain and licensable content, and from there individual companies and studios will train their own models to produce their own proprietary “style”.

A big win for open access https://www.docbug.com/blog/archives/1028 Fri, 26 Aug 2022 17:21:02 +0000 https://www.docbug.com/blog/?p=1028 The White House Office of Science and Technology Policy (OSTP) just announced that all publications and supporting data stemming from federally funded research must soon also be made available to all, without an embargo period or cost. The Open Access movement has made a lot of headway since Aaron Swartz‘s early activism, and since 2013 such publications have had to be made freely available within a year of first publication. This new policy eliminates that one-year embargo. The accompanying memorandum points to how having immediate access to COVID-19 research greatly accelerated our progress against the disease, and says that “the insights of new and cutting-edge research stemming from the support of federal agencies should be immediately available—not just in moments of crisis, but in every moment.”

Ars Technica has more background, and says the announcement came as something of a surprise.

Men at Work vs. Kookaburra: How many other copyright land mines are out there? https://www.docbug.com/blog/archives/814 Thu, 04 Feb 2010 18:41:57 +0000 https://www.docbug.com/blog/archives/814 A couple of years ago, the Australian quiz show "Spicks and Specks" asked its panelists to name the Australian folk song that could be heard in a popular hit single first released in 1979. The answer: "Kookaburra Sits in the Old Gum Tree," heard in the flute riff of the Grammy-winning band Men At Work's hit single, "Down Under."

That quiz show prompted Larrikin Publishing, who bought the copyright to the now 68-year-old folk song after its composer's death in 1988, to sue for copyright infringement. And yesterday a Sydney judge ruled that the 11-note flute riff did indeed copy from the folk song, and will now determine what royalties might be owed by the band.

Despite what some breathless news reports are claiming, damages will likely be limited — as CNN reports, Larrikin is only claiming a percentage of revenues on Australian sales from the past six years, and the judge has already noted that he has not found that the flute riff is "a substantial part of Down Under or that it is the 'hook' of that song." Still, it's gotten me thinking about how many other copyright land mines might be out there, just waiting for someone (or something) to uncover the similarity between some riff and some earlier melody.

Musicians are always borrowing riffs and melodies from previous songs, from the little riffs jazz musicians throw in as a shout-out to other songs to wholesale note-for-note copying. A few well-known examples: The Beach Boys' hit "Surfin' USA" is a note-for-note copy of Chuck Berry's "Sweet Little Sixteen" (Berry was granted writing credits on the former after a successful lawsuit); the tune of the 1953 song "Istanbul (Not Constantinople)" is extremely similar to Irving Berlin's "Puttin' On The Ritz"; and the chorus of the 1923 hit "Yes! We Have No Bananas" is almost entirely made up of riffs from other songs.

Those are just a few examples that have come to people's attention, but how many others are out there that borrow from less obvious sources? How many are just waiting for a game show (or a new search engine) to alert copyright holders to a potential opportunity for some quick royalties? In the past few years it has become possible to search a music database for a recording by playing a snippet of a song or, in some cases, just by humming a melody. What is not yet possible is to automatically process an audio stream, tease out individual riffs and melody lines, and then find other, earlier pieces that contain similar riffs and melody lines. But that kind of research is ongoing, and I have no doubt that it will be solved at some point. When that day comes, we will in essence be able to map out the genome of every music recording ever made, and from that we can lay bare the lineage of every song in history.
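
As a hint of what that might look like, here is a hedged sketch of comparing two melodies by extracting pitch-class (chroma) features and aligning them with dynamic time warping, using the librosa library. The file names are placeholders, and a real system would first have to isolate individual riffs and melody lines, which is the hard, still-unsolved part.

    # Sketch: compare the melodic contour of two recordings with chroma
    # features + dynamic time warping. File names are placeholders.
    import librosa

    y1, sr1 = librosa.load("down_under_flute_riff.wav")
    y2, sr2 = librosa.load("kookaburra.wav")

    chroma1 = librosa.feature.chroma_cqt(y=y1, sr=sr1)
    chroma2 = librosa.feature.chroma_cqt(y=y2, sr=sr2)

    # Lower normalized alignment cost suggests more similar melodies.
    D, wp = librosa.sequence.dtw(X=chroma1, Y=chroma2, metric="cosine")
    print("normalized alignment cost:", D[-1, -1] / len(wp))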

When that happens, how many other Kookaburras will we find?

Study shows patents actually deter the “progress of science and useful arts” https://www.docbug.com/blog/archives/809 Mon, 06 Jul 2009 20:39:43 +0000 https://www.docbug.com/blog/archives/809 The power to grant patents is one of the powers specifically spelled out in the U.S. Constitution, which states that Congress shall have the power

To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries;

A new study published in The Columbia Science and Technology Law Review suggests that, in fact, patents deter innovation. The authors (one of whom, coincidentally, was my old roommate in grad school) created a patent-simulation game that allows players to "invent" new products by arranging a sequence of widgets. These products can be sold to consumers, and the market value of a sequence is related to the value of its sub-sequences, so it makes sense for players to try to build off of particularly valuable sub-sequences.

Once a player has invented a previously undiscovered sequence, he may choose to open source the discovery or to pay a fee and patent it. Open sourcing a sequence simply prevents anyone from patenting any sequence based on it, while patenting a sequence allows the patent holder to license the sequence to other players and to sue anyone who infringes on the patent. If a patent holder decides to enforce his patent against an infringer, both players decide how many lawyers they wish to hire (again for a fee), and the case is decided by (virtual) die roll. Patent holders may also sell a patent outright to another player.
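
To give a flavor of the mechanics described above, here is a toy sketch of the game loop. To be clear, this is not the authors' PatentSim code; the widget values, fees, damages and die-roll odds are all made-up assumptions.

    # Toy sketch of PatentSim-style mechanics (not the study's actual code).
    # All values, fees and probabilities are illustrative assumptions.
    import random

    WIDGET_VALUE = {"A": 1, "B": 2, "C": 3, "D": 5}   # hypothetical widget values

    def market_value(seq):
        # Value rewards building on longer (and thus more valuable) sub-sequences.
        return sum(WIDGET_VALUE[w] for w in seq) + len(seq) ** 2

    class Player:
        def __init__(self, name):
            self.name, self.money = name, 100

    class Game:
        def __init__(self):
            self.patents = {}         # sequence -> patent holder
            self.commons = set()      # open-sourced sequences (unpatentable)

        def invent(self, player, seq, patent=False, fee=10):
            """Register a previously undiscovered sequence and sell it to consumers."""
            if patent and seq not in self.commons:
                self.patents[seq] = player            # pay the fee to patent it
                player.money -= fee
            else:
                self.commons.add(seq)                 # otherwise it joins the commons
            player.money += market_value(seq)         # revenue from consumers

        def sue(self, holder, infringer, holder_lawyers, infringer_lawyers, damages=20):
            """Infringement suit decided by a lawyer-weighted (virtual) die roll."""
            holder.money -= holder_lawyers
            infringer.money -= infringer_lawyers
            odds = holder_lawyers / max(1, holder_lawyers + infringer_lawyers)
            if random.random() < odds:
                infringer.money -= damages
                holder.money += damages

    # Example: Alice patents "ABD", Bob builds on it without a license and gets sued.
    game, alice, bob = Game(), Player("Alice"), Player("Bob")
    game.invent(alice, "ABD", patent=True)
    game.invent(bob, "ABDD")
    game.sue(alice, bob, holder_lawyers=15, infringer_lawyers=5)
    print(alice.money, bob.money)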

The researchers ran subjects through one of three versions of the game: a pure-patent version that did not allow open source, a mixed version that allowed both the patent and open-source options, and a pure-commons version where patents were not allowed at all. Players were recruited from the incoming law school class, and were told that the player with the most money at the end of a trial would be given a prize. The results show that players in the pure-commons version produced more innovation (number of unique inventions), more productivity (number of products made) and higher social utility (amount of money each player ended with) than in either of the other two variations. (The difference in innovation was not statistically significant; the other two differences were highly significant.) Interestingly enough, they found no significant difference between the pure-patent system and the mixed system on any of the three metrics.

It's easy to nit-pick these kinds of simulation-based experiments, both in terms of how parameters are set and more generally whether the simulation captures enough of the real-world dynamics to be useful. One nit I have is that (near as I can tell) the market value of a product is the same regardless of how many competitors are selling the same product, which would eliminate one of the primary purposes of gaining patent protection. I also wonder whether the stated goal of making more money than your fellow players discouraged strategies that help everyone equally (a rising tide raising all ships), and in particular whether it might have discouraged use of the "open source" option in the mixed variation.

That said, it's an interesting study, and in their discussion the authors cite many empirical and theoretical studies in the past few decades that have also brought into question whether patents actually promote innovation in the real world. The authors also suggest the possibility of more studies using their PatentSim game, and possibly even creating an online massive multiplayer version, which would presumably allow players to develop their strategy and experience with the game over longer periods of time.

(Via ReadWriteWeb)

Creative Commons launches tool to reclaim rights https://www.docbug.com/blog/archives/703 Fri, 22 Dec 2006 01:10:54 +0000 https://www.docbug.com/blog/archives/703 From the Creative Commons weblog:

Creative Commons is excited to launch a beta version of its “Returning Authors Rights: Termination of Transfer” tool. The tool has been included in ccLabs — CC’s platform for demoing new tech tools. It’s a beta demo so it doesn’t produce any useable results at this stage. We have launched it to get your feedback.

Briefly, the U.S. Copyright Act gives creators a mechanism by which they can reclaim rights that they sold or licensed away many years ago. Often artists sign away their rights at the start of their careers when they lack sophisticated negotiating experience, access to good legal advice or any knowledge of the true value of their work so they face an unequal bargaining situation. The “termination of transfer” provisions are intended to give artists a way to rebalance the bargain, giving them a “second bite of the apple.” By allowing artists to reclaim their rights, the U.S. Congress hoped that authors could renegotiate old deals or negotiate new deals on stronger footing (and hopefully with greater remuneration too!!). A longer explanation of the purpose of the “termination of transfer” provisions is set out in this FAQ.

Basically their tool is designed to help authors and artists navigate the legal waters and reclaim their copyrights. From Lessig's blog:

Why is this a Creative Commons project? We've seen CC from the start as a tool to help creators manage an insanely complicated copyright system. When we have this running, we'll offer any copyright owner who has reclaimed his or her rights the opportunity to distribute the work under a CC license. But that will be optional. Right now, we're just offering the tool to make it simpler for authors to get what the copyright system was intended to give them.

EFF Patent Busting https://www.docbug.com/blog/archives/682 Thu, 16 Nov 2006 01:51:57 +0000 https://www.docbug.com/blog/archives/682 EFF has a call out for prior art to help bust two broad patents:

The Patent Busting Project fights back against bogus patents by filing requests for reexamination against the worst offenders. We've successfully pushed the Patent and Trademark Office to reexamine patents held by Clear Channel and Test.com, and now we need your help to bust a few more.

A company called NeoMedia has a patent on reading an 'index' (e.g, a bar code) off a product, matching it with information in a database, and then connecting to a remote computer (e.g., a website). In other words, NeoMedia claims to have invented the basic concept of any technology that could, say, scan a product on a supermarket shelf and then connect you to a price-comparison website. To bust this overly broad patent, we need to find prior art that describes a product made before 1995 that might be something like a UPC scanner, but which also connects the user to a remote computer or database. Take a look at the description and please forward it to anyone you know who might have special knowledge in this area. You can submit your tips here.

Also in our sights is a patent on personalized subdomains from Ideaflood. For example, a student named Alice might have personalized URL 'http://alice.university.edu/' that redirects to a personal directory at 'http://www.university.edu/~alice/.' Ideaflood says that it has a patent on a key mechanism that makes this possible. We need prior art that describes such a method being used before 1999, specifically using DNS wildcards, html frames, and virtual hosting. Prior art systems might have existed in foreign ISPs, universities, or other ISPs with web-hosting services. You can submit tips here.
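
For anyone unfamiliar with the mechanism being described, here is a hedged sketch of how a personalized-subdomain redirect can work once a DNS wildcard points every subdomain at a single server. The domain, port and paths are hypothetical, and this illustrates the general technique, not a reconstruction of Ideaflood's actual claimed method.

    # Sketch of a wildcard-subdomain redirect: alice.university.edu -> www.university.edu/~alice/
    # Assumes a DNS wildcard (*.university.edu) already points at this server.
    # Domain, port and paths are hypothetical.
    from wsgiref.simple_server import make_server

    BASE_DOMAIN = "university.edu"

    def app(environ, start_response):
        host = environ.get("HTTP_HOST", "").split(":")[0]     # e.g. "alice.university.edu"
        suffix = "." + BASE_DOMAIN
        if host.endswith(suffix) and host[: -len(suffix)] not in ("", "www"):
            user = host[: -len(suffix)]
            start_response("302 Found", [("Location", f"http://www.{BASE_DOMAIN}/~{user}/")])
            return [b""]
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"unknown host\n"]

    if __name__ == "__main__":
        make_server("", 8080, app).serve_forever()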

I'll betcha there's prior art in the augmented reality field that reads on the first patent, either from Steve Feiner's group at Columbia or maybe even the stuff we were playing with at the Media Lab. (I'll go rooting around once I meet a different deadline I'm spending my evenings on...)

Finally a patent that admits to bogus claims! https://www.docbug.com/blog/archives/678 Thu, 02 Nov 2006 01:42:21 +0000 https://www.docbug.com/blog/archives/678 There are so many bogus claims in patent applications these days that it's kind of nice to see an application that comes right out and admits it (via The Volokh Conspiracy):

9. The method of providing user interface displays in an image forming apparatus which is really a bogus claim included amongst real claims, and which should be removed before filing; wherein the claim is included to determine if the inventor actually read the claims and the inventor should instruct the attorneys to remove the claim.
patently obvious https://www.docbug.com/blog/archives/673 Wed, 25 Oct 2006 01:07:30 +0000 https://www.docbug.com/blog/archives/673 patently obvious, adj. An idea so blazingly obvious, only the patent office would think it novel enough to patent.
