Ever since AI gained some semblance of a human writer's mind, it has been trying hard to overtake entire industries. Technological wonders perform many tasks, from assembling gadgets to carrying out medical screenings in places human doctors can't see without tiny cameras. But one of the biggest challenges robots haven't yet mastered is writing like a human essay writer. In this context, ChatGPT is a new contender entering the digital arena.
What is ChatGPT?
The GPT-3 (Generative Pre-trained Transformer 3) neural network is one of the latest developments designed to create various texts, from essays and articles to poetry and dialogue. All you need to do is give it a little input, like instructions for a simple English 101 essay, and it will write a presumably perfect paper. We did just that to check how good this AI is, and then ordered the same paper from CustomWritings.com. Our next step was to give both texts to an anonymous professor to grade.
In general, ChatGPT has caused quite a fuss in universities. Professors are rethinking the ways they teach, shifting from typed responses to handwritten ones and even incorporating AI into class discussions. Journalists, too, have covered professors' struggles to tell students' texts from ChatGPT's. Still, is the academic future really going to be overtaken by robots? Let's find out through a practical study.
How to use ChatGPT?
Using this chatbot is very simple: you just type your instructions in the corresponding field and watch the AI create the text you want. It helps to start your request with "write me a…" and then add further instructions. Note also that you have to be registered with OpenAI to access all the features of this digital writer. That way, you can adjust text parameters such as temperature and maximum length, as well as choose the model (like text-davinci-003). This works fine for the simplest requests, such as "write a lyrical poem about love." Still, the real question is, can ChatGPT cope with academic demands, which are often very complex?
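To make those parameters concrete, here is a minimal sketch of how a GPT-3 request with a temperature, a maximum length, and the text-davinci-003 model might be assembled. The helper function and its default values are hypothetical; a real request would go through OpenAI's legacy Completions endpoint with your own API key.

```python
# Hypothetical helper that assembles the parameters mentioned above:
# the model (text-davinci-003), temperature, and maximum length.
# A real request would be sent to OpenAI's legacy Completions endpoint,
# e.g. openai.Completion.create(**payload), using your own API key.

def build_completion_request(prompt: str,
                             model: str = "text-davinci-003",
                             temperature: float = 0.7,
                             max_tokens: int = 900) -> dict:
    """Return a request payload for a GPT-3 text completion."""
    return {
        "model": model,
        "prompt": prompt,            # starts with "write me a…" per the tip above
        "temperature": temperature,  # higher values give more varied wording
        "max_tokens": max_tokens,    # caps the length of the generated text
    }

payload = build_completion_request("Write me a lyrical poem about love.")
print(payload["model"])  # text-davinci-003
```

Roughly speaking, a lower temperature makes the output more predictable, while a higher one makes it more creative; the maximum length caps how long the generated text can get.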
Here are the instructions we gave to both the OpenAI chatbot and the human essay writer: "Write a supporting statement for the exchange program. Reasons you want to attend this program and your interests. I'm interested in business programs and I like to play football. This is a 2-week program in the UK in July 2022. Write me at least 600 words." This task seems easy enough for both the OpenAI model and one of the CustomWritings experts.
We assessed the two papers using the anonymous professor's base grading system plus a digital analysis that measures the accuracy and readability of each text. So, we're in for a battle: ChatGPT vs. the human essay writer. What will the results show? Keep reading to find out.
Essay structure: can ChatGPT write an essay?
Almost every line by ChatGPT started with "I." That alone made the text sound mechanical, but there were also sentences that made no sense in the middle of passages. Phrases like "I took business classes in school and now I study business at university" are everywhere in the text, while substantial sentences are rare. In contrast, the human writer was very specific, connecting each interest to the main goal: participating in the exchange program.
Thus, structure is one of ChatGPT's weakest points. On a scale from 1 to 10, with 10 being the best result, the AI scored only 2 points against the human writer's 8. This wasn't a big surprise, considering that almost every sentence in an academic paper must meet a set of criteria. Every paragraph needs to open with a topic sentence and follow a required logical pattern, while the introduction and conclusion are even more demanding. If you're a student, you probably remember poring over books for hours to craft a thesis statement that suits your instructor's demands.
What the professor said:
“The human essay writer got pretty close to what I consider ideal structure. There were small inaccuracies, such as paragraphs that differed slightly in size, but the essay had a good flow and was easy to read. It was as if it came from one of my best students. The text by GPT-3, in contrast, looked very ‘chopped,’ with tiny parts following huge passages.”
Needless to say, the machine couldn't meet the stylistic demands we had, and there was no way to tell it how to fix the essay. In contrast, we could send any requirements at all to the human essay writer while ordering the paper. Perhaps robots' inability to communicate outside of their basic algorithms is their worst problem so far.
Logical organization of ideas
The real person's essay flowed from one point to the next. Converted into a short scheme, it would look like this: "I'm interested in business, specifically in learning how the UK curriculum compares to my own. I also enjoy football, so I believe I'll be able to keep up casual conversations and make new acquaintances in this country. That's why I need the exchange program." The same scheme from ChatGPT is much less sensible: "I've always loved business because my father is a businessman. The UK football culture is great. The UK culture is great. This exchange program will be unforgettable for me."
ChatGPT did a slightly better job than before, scoring 7 out of 10 against the human writer's 9. That's pretty impressive for an AI competing with a trained human being. Still, there were several catches, which the professor sorted into three groups for our convenience. Here's what the instructor said:
- Firstly, ChatGPT had logic but no coherent organization. It's clear that the same weak argument can't work even if it's repeated several times. That's what I saw with this OpenAI writer, and while I had to admit it created something better than gibberish, it was still far from what I'd call a decent paper.
- Secondly, the human essay writer cited sources while GPT-3 didn't. There were no in-text citations or a bibliography in the OpenAI text. That's why I had a hard time deciphering what was a quote and what wasn't. Every learner knows that's an unacceptable approach to academic work. In contrast, the human essay writer's paper had a sound formal style and no plagiarism at all.
- Thirdly, ChatGPT presented speculations instead of research. What I learned from this text was often far from the topic, like the details about the writer's father. It's as if the ChatGPT essay generator took snippets of conversations about an interest in business and crammed them together. It came extremely close to imitating human thought, but not to mimicking people's writing.
Who followed instructions better and why?
The AI wrote: "I have done some research practices in the UK, which has further piqued my interest in this exchange program." This is a precise mention of background that could really help a student get in. The human writer clearly focused more on what one can gain from the exchange than on current achievements. They wrote: "This exchange program can become a great opportunity for me to understand what we can improve in our extracurricular activities to become more familiar with other countries' business culture."
People can be inattentive and sometimes fail to meet the simplest requirements while nailing the more complex ones. For example, any student remembers forgetting to put the date at the top of an essay or failing to change the working title. The OpenAI writing bot finally had the upper hand, scoring 8.5/10 against the human essay writer's 8/10. As far as measurable criteria went, GPT-3 appeared to be better.
What the professor said:
"The AI essay writer mentioned everything I required, including personal achievements, while the CustomWritings expert focused more on future gains. However, although the AI mentioned everything, the essay turned out extremely superficial. In short, it's an A for diligence but an F for depth." Good structure could have redeemed the digitally generated paper a little, but there was none, unfortunately.
Proficiency in the English language
ChatGPT wrote: "I am an avid sports lover and I have a particular love for football." This is grammatically correct, but both the sentence structure and vocabulary are repetitive, making the text read like a robotic report. In contrast, the CustomWritings expert formulated a similar thought like this: "My love for sports manifests itself in my devotion to football." The phrasing might be a bit awkward, but the sentence overall is much more human and "warm."
This is one criterion where the AI-generated and the human-made texts were eerily close, both scoring about 9 points. If proficiency in English were the only thing you needed to write a perfect essay, the bot would be on approximately the same level as a human being.
Points from the professor:
- ChatGPT formed only correct sentences. To give credit where it's due, this chatbot is clearly well-trained to follow all grammatical patterns. The person from CustomWritings who wrote the other paper clearly had a decent competitor when it came to avoiding passive voice and writing with clarity. Still, the lack of logical structure made ChatGPT's text harder to understand.
- The human essay writer made several small mistakes. There were a few things the CustomWritings expert could improve about their grammar, especially some formulations. Despite this, the clarity level was high, and I wouldn't subtract more than 0.5 points for it.
- No logic is worse than a few grammatical errors. I prefer an essay with a few grammatical mistakes but a decent structure over one that's poorly organized. We're all human, after all, so it's logical thinking that counts more.
Can ChatGPT write essays?
According to the final results, GPT-3 is approximately 25% worse than a human essay writer when it comes to academic essays. Still, these numbers hide the bot's inability to create texts that follow academic logic. This is the worst part about AI writing: you'd need to "rebuild" the paper you get from such a bot almost from scratch to produce something your instructor could accept. So, can ChatGPT make an essay sound better? Unfortunately, no.
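For readers who want to see where such an overall figure could come from, here is a rough sketch that averages the four per-criterion scores reported above (structure, logical organization, following instructions, and English proficiency). A simple unweighted average puts the AI about 22% behind the human writer, close to the rounded figure cited; the exact headline number may use different rounding or weighting.

```python
# Averaging the four per-criterion scores reported in this article.
# The headline figure may use different rounding or weighting; an
# unweighted average lands at roughly 22%.
ai_scores = {"structure": 2, "organization": 7, "instructions": 8.5, "language": 9}
human_scores = {"structure": 8, "organization": 9, "instructions": 8, "language": 9}

ai_avg = sum(ai_scores.values()) / len(ai_scores)           # 6.625
human_avg = sum(human_scores.values()) / len(human_scores)  # 8.5
gap = 1 - ai_avg / human_avg                                # ~0.22
print(f"AI: {ai_avg}, human: {human_avg}, gap: {gap:.0%}")
```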
Other problems with ChatGPT include the absence of in-text citations and a bibliography, along with a lack of original ideas in place of recycled ones. We were hoping to find a few interesting thoughts while trying out this bot, but that never happened. So, we can't call it a generator of ideas. The CustomWritings expert, on the other hand, pleasantly surprised both us and the professor we asked to review the essay. What this essay writer produced is a decently written paper that could compete with some of the most creative works that instructor has ever seen.
What the results show
So, is ChatGPT better than humans? The best answer is probably "not really." While AI comes pretty close to people in terms of following instructions and correct grammar, it can't create sound academic structure or keep to the proper style. Moreover, you can't expect a correct bibliography and in-text citations when you're working with this chatbot. All these issues make it quite inconvenient for students who need a sample text. As the anonymous professor who checked the bot's work put it, "you'd spend more time editing this draft than you would writing an essay from scratch."
As for the CustomWritings experts, they did a much more balanced job. Not everything was ideal about the essay we received from this service, but it still got a much higher score. On top of that, we needed to clarify some of the instructions with the writer as they were completing the order, and we had several helpful conversations, which, of course, would be impossible with an AI. In short, human essay writers have once again beaten digital bots in terms of quality. Everything might change in a couple of years, but for now, you can only trust another person to create a complex academic text of a decent level.
What are the best AI essay writers and are they good?
Despite the downsides of using AI for academic papers that our research has shown, quite a few websites tailored specifically to creating essays using text generators have sprung up recently. Let’s look at them to see if they’re worth your while.
- EssayGenius.ai is free and requires no sign-up. It generates sample essays with the same sketchy structure and generic information we've pointed out as flaws of AI writing above. Unfortunately, it's no more academic than a random text generator.
- The-Good-AI.com offers essay writing and essay outlining. It requires registration and isn't much different from an extensive Google search. You could save some time on typing each query individually, but it still won't give you a quality example to work from.
- Jenni.ai is by far the most useful one, since it can find excerpts from scientific sources to use in your papers. Once again, this tool is handy for saving time on googling, but no scholar can get a cohesive paper out of this AI (which is fine, because that's not its goal). Still, the question remains: how do you structure, paraphrase, and combine those materials? No AI will give you a custom example as an answer.
Plagiarism checkers have already learned to recognize AI output as plagiarism, so using such tools can have quite grievous consequences for many students. On the other hand, human-written assignments you can use as examples are becoming more and more valuable. After all, an AI won't iterate on your instructions: you only get one prompt. Real people, on the other hand, can professionally craft a paper with any customizations you request. That's why AI isn't taking over the jobs of online academic writers anytime soon.