Large language models (LLMs) are more likely to criminalise users who speak African American English, according to a new Cornell University study.