Regrowing permanent teeth, and how biased is your AI detector?

A little bit of molecular biology and some things to know as we transition to a world with AI

How many sets of teeth do humans have?

One does not need to be a science student to know the answer to this. We know very well that the milk teeth that babies develop will eventually fall out and be replaced by permanent teeth.

A new option for regrowing teeth could soon be available in dental treatments

What researchers have found is that the human jaw also has buds for a third set of teeth. No, this is not hyperdontia, the condition where some individuals develop a few extra teeth on top of their permanent set. There is actually another layer of tooth buds, which is simply never activated to grow into a complete set.

The opposite is also true. In some cases, individuals do not develop their first set of teeth properly. This usually shows up in children between the ages of two and six and affects their ability to chew food, swallow and even speak.

Katsu Takahashi and his team of researchers at Kyoto University found that mice lacking a particular gene grew a larger-than-normal number of teeth. The protein that gene codes for, USAG-1, turned out to be responsible for setting a limit on the number of teeth the species develops.

Blocking the protein could allow more teeth to grow, and the team demonstrated this in mice. The paper was published two years ago, and the work is now headed for clinical trials in humans next year.

The bias of AI detectors

The meteoric rise of artificial intelligence bots in recent times has also led to a dramatic increase in suspicion. When a student submits an essay or a freelance writer turns in an assignment, people are quick to assume that the content might have been written with the help of AI, more so if the individual exceeds expectations.

This has also led to the mushrooming of an AI-detection industry whose tools claim to detect bot-generated content with accuracy as high as 99 percent. If a tool flags content as bot-generated, most people accept the verdict at face value.

But Stanford researcher James Zou refused to accept those claims as they were. Instead, he ran 91 essays written by students who were non-native English speakers through seven different AI-detection tools.

These essays had been written for the English proficiency test TOEFL, yet more than half of them were flagged as bot-generated by these programs. One popular tool even labelled 98 percent of the submitted essays as written by AI.

Zou and his team then ran essays written by eighth graders through the same tools. These were students for whom English was their native language, and the tools cleared 90 percent of the work as human-written.

So what is happening here?

Turns out the difference lies in "text perplexity".

It is a measure of how surprised a large language model is by a piece of text: how hard the model finds it to predict the next word at each point in a sentence.

Tools like ChatGPT churn out text of low perplexity, while native English speakers write with high perplexity (in comparison to the bot).

Non-native English speakers, though, might use simpler words and more predictable phrasing in their write-ups, lowering the perplexity of their texts and making them look as if they were generated by a bot.
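To make the idea concrete, here is a minimal Python sketch of how a perplexity-based check might score a piece of text. It uses GPT-2 from the Hugging Face transformers library purely as a stand-in scoring model, and the two example sentences are made up for illustration; real detectors use their own models and thresholds, so treat this as a sketch of the principle rather than any particular tool's method.

```python
# Minimal sketch of a perplexity-based text score (illustrative only).
# Assumes the `torch` and `transformers` packages are installed; GPT-2 is
# just a convenient stand-in model, not what any specific detector uses.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Ask the model to predict each token from the ones before it;
    # the returned loss is the average negative log-likelihood.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids=ids, labels=ids).loss
    # Perplexity is the exponential of that average loss:
    # low values mean the model found the text very predictable.
    return torch.exp(loss).item()

# Hypothetical examples: plain, predictable phrasing vs. more unusual wording.
simple = "The cat sat on the mat. The cat is happy."
ornate = "Perched imperiously upon the threadbare rug, the tabby surveyed its domain."

print(perplexity(simple))  # lower score, which a detector may read as "bot-like"
print(perplexity(ornate))  # higher score, which reads as more "human-like"
```

The point of the sketch is simply that the score depends on how predictable the wording is to the scoring model, which is exactly why plain, textbook-style English from a non-native speaker can end up on the wrong side of a detector's threshold.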

But this can have serious consequences.

Could you have guessed?

For those who are still on the fence about how good AI models have become, this is something you must read.

A gaming company set up a challenge for users to detect whether they were conversing with a bot or a human over the internet.

Users got two minutes to chat with the game about absolutely anything, and at the end of the conversation they had to guess whether they had been talking to a human or not.

A 2023 take on the famous Turing Test, Human or Not was a fascinating experiment while it was running. For now, users can't try it out for themselves, but some of the results are detailed in this post.

Do give it a read.

If you enjoyed this edition of the newsletter, do consider sharing it so others in your circle also benefit.

Plus, it can make for a great conversation with a colleague, the next time you are relaxing 'Over a Cup of Coffee'.

If this was forwarded to you, you can sign up for this newsletter yourself.

Thanks for reading.

Ameya