Large Language Models (LLMs) like ChatGPT train on multiple sources of information, including web content. That data becomes the basis for answers and article-style summaries of the content, produced without attribution or benefit to those who published the original content used for training.
Search engines download website content (called crawling and indexing) to provide answers in the form of links to the websites.
Website publishers have the ability to opt-out of having their content crawled and indexed by search engines through the Robots Exclusion Protocol, commonly referred to as Robots.txt.
The Robots Exclusion Protocol is not an official Internet standard, but it is one that legitimate web crawlers obey.
Should web publishers be able to use the Robots.txt protocol to prevent large language models from using their website content?
Large Language Models Use Website Content Without Attribution
Some who are involved with search marketing are uncomfortable with how website data is used to train machines without giving anything back, like an acknowledgement or traffic.
Hans Petter Blindheim (LinkedIn profile), Senior Expert at Curamando, shared his opinions with me.
“When an author writes something after having learned something from an article on your site, they will more often than not link to your original work because it offers credibility and as a professional courtesy.
It’s called a citation.
But the scale at which ChatGPT assimilates content and does not grant anything back differentiates it from both Google and people.
A website is generally created with a business directive in mind.
Google helps people find the content and provides traffic, a mutually beneficial arrangement.
But it’s not like large language models asked your permission to use your content, they just use it in a broader sense than what was expected when your content was published.
And if the AI language models do not offer value in return – why should publishers allow them to crawl and use the content?
Does their use of your content meet the standards of fair use?
When ChatGPT and Google’s own ML/AI models train on your content without permission, spin what they learn there, and use that while keeping people away from your websites – shouldn’t the industry and also lawmakers try to take back control over the Internet by forcing them to transition to an “opt-in” model?”
The concerns that Hans expresses are reasonable.
In light of how fast technology is evolving, should laws concerning fair use be reconsidered and updated?
I asked John Rizvi, a Registered Patent Attorney (LinkedIn profile) who is board certified in Intellectual Property Law, if Internet copyright laws are outdated.
“Yes, without a doubt.
One major bone of contention in cases like this is the fact that the law inevitably evolves far more slowly than technology does.
In the 1800s, this maybe didn’t matter so much because advances were relatively slow and so legal machinery was more or less tooled to match.
Today, however, runaway technological advances have far outstripped the ability of the law to keep up.
There are simply too many advances and too many moving parts for the law to keep up.
As it is currently constituted and administered, largely by people who are hardly experts in the areas of technology we’re discussing here, the law is poorly equipped or structured to keep pace with technology…and we must consider that this isn’t an entirely bad thing.
So, in one regard, yes, Intellectual Property law does need to evolve if it even purports, let alone hopes, to keep pace with technological advances.
The primary problem is striking a balance between keeping up with the ways various forms of tech can be used while holding back from blatant overreach or outright censorship for political gain cloaked in benevolent intentions.
The law also has to take care not to legislate against possible uses of tech so broadly as to strangle any potential benefit that may derive from them.
You could easily run afoul of the First Amendment and any number of settled cases that circumscribe how, why, and to what degree intellectual property can be used and by whom.
And attempting to envision every conceivable usage of technology years or decades before the framework exists to make it viable or even possible would be an exceedingly dangerous fool’s errand.
In situations like this, the law really cannot help but be reactive to how technology is used…not necessarily how it was intended.
That’s not likely to change anytime soon, unless we hit a massive and unanticipated tech plateau that allows the law time to catch up to current events.”
So it appears that the issue of copyright law involves many considerations to balance when it comes to how AI is trained; there is no simple answer.
OpenAI and Microsoft Sued
An interesting case that was recently filed is one in which OpenAI and Microsoft used open source code to create their Copilot product.
The problem with using open source code is that many open source licenses require attribution.
According to an article published in a scholarly journal:
“Plaintiffs allege that OpenAI and GitHub assembled and distributed a commercial product called Copilot to create generative code using publicly accessible code originally made available under various “open source”-style licenses, many of which include an attribution requirement.
As GitHub states, ‘…[t]rained on billions of lines of code, GitHub Copilot turns natural language prompts into coding suggestions across dozens of languages.’
The resulting product allegedly omitted any credit to the original creators.”
The author of that article, who is a legal expert on the subject of copyrights, wrote that many view open source licenses as a “free-for-all.”
Some may also consider the phrase “free-for-all” a fair description of the datasets of Internet content that are scraped and used to generate AI products like ChatGPT.
Background on LLMs and Datasets
Large language models train on multiple datasets of content. Datasets can consist of emails, books, government data, Wikipedia articles, and even datasets built from websites linked in Reddit posts with at least three upvotes.
Many of the datasets related to the content of the Internet have their origins in the crawl created by a non-profit organization called Common Crawl.
Their dataset, the Common Crawl dataset, is available free for download and use.
The Common Crawl dataset is the starting point for many other datasets that are created from it.
For example, GPT-3 used a filtered version of Common Crawl (Language Models are Few-Shot Learners PDF).
This is how GPT-3 researchers used the website data contained within the Common Crawl dataset:
“Datasets for language models have rapidly expanded, culminating in the Common Crawl dataset… constituting nearly a trillion words.
This size of dataset is sufficient to train our largest models without ever updating on the same sequence twice.
However, we have found that unfiltered or lightly filtered versions of Common Crawl tend to have lower quality than more curated datasets.
Therefore, we took 3 steps to improve the average quality of our datasets:
(1) we downloaded and filtered a version of CommonCrawl based on similarity to a range of high-quality reference corpora,
(2) we performed fuzzy deduplication at the document level, within and across datasets, to prevent redundancy and preserve the integrity of our held-out validation set as an accurate measure of overfitting, and
(3) we also added known high-quality reference corpora to the training mix to augment CommonCrawl and increase its diversity.”
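The fuzzy deduplication step (2) can be illustrated with a minimal sketch. GPT-3’s actual pipeline is not public at this level of detail, so the shingle size and similarity threshold below are purely illustrative assumptions, not OpenAI’s implementation:

```python
# Illustrative sketch of document-level fuzzy deduplication using
# word-shingle Jaccard similarity. Shingle size and threshold are
# assumptions for demonstration, not GPT-3's actual parameters.

def shingles(text, n=3):
    """Return the set of n-word shingles for a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def fuzzy_dedup(docs, threshold=0.8, n=3):
    """Keep a document only if it is not a near-duplicate of one already kept."""
    kept, kept_shingles = [], []
    for doc in docs:
        s = shingles(doc, n)
        if all(jaccard(s, t) < threshold for t in kept_shingles):
            kept.append(doc)
            kept_shingles.append(s)
    return kept
```

At the scale of Common Crawl, a production pipeline would use an approximate technique such as MinHash rather than pairwise comparison, but the filtering idea is the same.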
Google’s C4 dataset (Colossal Clean Crawled Corpus), which was used to create the Text-to-Text Transfer Transformer (T5), has its roots in the Common Crawl dataset, too.
Their research paper (Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer PDF) explains:
“Before presenting the results from our large-scale empirical study, we review the necessary background topics required to understand our results, including the Transformer model architecture and the downstream tasks we evaluate on.
We also introduce our approach for treating every problem as a text-to-text task and describe our “Colossal Clean Crawled Corpus” (C4), the Common Crawl-based data set we created as a source of unlabeled text data.
We refer to our model and framework as the ‘Text-to-Text Transfer Transformer’ (T5).”
Google published an article on their AI blog that further explains how Common Crawl data (which contains content scraped from the Internet) was used to create C4.
“An important ingredient for transfer learning is the unlabeled dataset used for pre-training.
To accurately measure the effect of scaling up the amount of pre-training, one needs a dataset that is not only high quality and diverse, but also massive.
Existing pre-training datasets don’t meet all three of these criteria — for example, text from Wikipedia is high quality, but uniform in style and relatively small for our purposes, while the Common Crawl web scrapes are enormous and highly diverse, but fairly low quality.
To satisfy these requirements, we developed the Colossal Clean Crawled Corpus (C4), a cleaned version of Common Crawl that is two orders of magnitude larger than Wikipedia.
Our cleaning process involved deduplication, discarding incomplete sentences, and removing offensive or noisy content.
This filtering led to better results on downstream tasks, while the additional size allowed the model size to increase without overfitting during pre-training.”
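The cleaning heuristics Google describes can be sketched in a few lines. The real C4 pipeline (detailed in the T5 paper) applies more rules than this; the checks below are simplified assumptions meant only to illustrate the kind of filtering involved:

```python
# Simplified sketch of C4-style cleaning: drop incomplete sentences
# and deduplicate documents. The real pipeline uses additional rules
# (bad-word lists, line-level dedup, language detection, etc.).

def clean_lines(text, min_words=3):
    """Keep only lines that look like complete sentences."""
    kept = []
    for line in text.splitlines():
        line = line.strip()
        # Discard short fragments and lines lacking terminal punctuation.
        if len(line.split()) >= min_words and line.endswith((".", "!", "?", '"')):
            kept.append(line)
    return "\n".join(kept)

def dedup_documents(docs):
    """Exact deduplication at the document level, preserving order."""
    seen, out = set(), []
    for doc in docs:
        if doc not in seen:
            seen.add(doc)
            out.append(doc)
    return out
```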
Google, OpenAI, and even Oracle’s Open Data are using Internet content, your content, to create datasets that are then used to create AI applications like ChatGPT.
Common Crawl Can Be Blocked
It is possible to block Common Crawl and subsequently opt-out of all the datasets that are based on Common Crawl.
But if the site has already been crawled, then the website data is already in datasets. There is no way to remove your content from the Common Crawl dataset or from any of the derivative datasets like C4.
Using the Robots.txt protocol will only block future crawls by Common Crawl, it won’t stop researchers from using content already in the dataset.
How to Block Common Crawl From Your Data
Blocking Common Crawl is possible through the use of the Robots.txt protocol, within the above discussed limitations.
The Common Crawl bot is called CCBot.
It is identified by the most up-to-date CCBot user-agent string: CCBot/2.0
Blocking CCBot with Robots.txt is accomplished the same as with any other bot.
Here is the code for blocking CCBot with Robots.txt.
User-agent: CCBot
Disallow: /
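As a quick sanity check (not part of the original how-to), Python’s standard urllib.robotparser can confirm that a rule like the one above blocks CCBot while leaving other crawlers unaffected:

```python
from urllib.robotparser import RobotFileParser

# Parse the robots.txt rule that blocks Common Crawl's bot.
rules = [
    "User-agent: CCBot",
    "Disallow: /",
]
parser = RobotFileParser()
parser.parse(rules)

# CCBot is blocked from the whole site; other bots are unaffected
# because the rule names only CCBot.
print(parser.can_fetch("CCBot/2.0", "https://example.com/page"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/page"))  # True
```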
CCBot crawls from Amazon AWS IP addresses.
CCBot also follows the nofollow robots meta tag.
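The directive referred to above is the standard robots meta tag. Placed in a page’s head, it asks compliant crawlers not to follow the links on that page:

```html
<!-- In the page <head>: asks compliant crawlers, including CCBot,
     not to follow links found on this page -->
<meta name="robots" content="nofollow">
```

Note that nofollow stops link-following from the page; it is the robots.txt Disallow rule above that blocks crawling of the page itself.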
What If You’re Not Blocking Common Crawl?
Web content can be downloaded without permission; that is how browsers work, they download content.
Neither Google nor anybody else needs permission to download and use content that is published publicly.
Website Publishers Have Limited Options
The consideration of whether it is ethical to train AI on web content doesn’t seem to be a part of any conversation about the ethics of how AI technology is developed.
It seems to be taken for granted that Internet content can be downloaded, summarized and transformed into a product called ChatGPT.
Does that seem fair? The answer is complicated.
Featured image by Shutterstock/Krakenimages.com