Using Chat GPT?

Ross Woods, 2023, '24 with thanks to Saba Yasmin, Dawie van Vuuren, and Tom Granoff
Major revision June-July 2024. Revised Oct. '24

When can students use Chat GPT? Most of its benefits are quite ethical, but it is easy, dangerous, and unethical for students to try to get it to write their essays and theses, and new software can now detect text written by an AI chatbot. For any kind of educational use, the current best practice is to get permission from your supervisor or instructor before using it.

Outside academia, it is helpful for writing rough drafts, although several iterations might be necessary.

Benefits of Chat GPT

  1. If you are writing a paper or thesis and have an area of interest, ask it for a list of research topics on that area of interest. Those topics might not be exactly what you need, but they could be very helpful.
  2. Chat GPT can find relevant sources for a literature review, and perhaps summarize each article. It can also find data sets.
  3. Chat GPT can break complicated topics into text that is easier to read.
  4. Ask Chat GPT for an outline of your paper, perhaps expressed as chapters and sections, or as sections and arguments pro and contra.
  5. Use Chat GPT to proofread your spelling and grammar (a scripted sketch of this appears after this list).
  6. Use Chat GPT to look for better words.
  7. Use Chat GPT to express your work in a particular style, such as a particular kind of document or the style of a prominent author or for a particular kind of readership. It can also re-express given information from another perspective. This is helpful for several reasons:
    1. Its default language style is quite bland.
    2. It might adjust your vocabulary choice.
    3. It might also improve the readability level, which largely relates to sentence length.
  8. It can also translate documents.
  9. It can write or check computer code in many different programming languages.
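
As a minimal sketch of points 5 and 9, the script below asks a chat model to proofread a short paragraph. It is only an illustration: it assumes the openai Python package (version 1.x) and an API key in the OPENAI_API_KEY environment variable, and the model name is a placeholder. The same request could simply be typed into the chat interface instead.

  # Ask a chat model to proofread a paragraph without changing its meaning.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  draft = ("This paragraf discuss the findings of the survey, "
           "which was conduct in 2023.")

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # placeholder; use whatever model you have access to
      messages=[
          {"role": "system",
           "content": "Proofread the text for spelling and grammar only. "
                      "Do not change its meaning or structure."},
          {"role": "user", "content": draft},
      ],
  )

  print(response.choices[0].message.content)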

Limitations

  1. Chat GPT is not a subject-matter specialist, so it can be very inaccurate in details.
  2. Chat GPT currently only collates and re-expresses existing information to answer your question or follow your instructions, and expresses it coherently in written language. This has various implications:
    1. It cannot work beyond the most recent information it was trained on.
    2. It cannot do original research because it cannot actually analyze, identify assumptions, think critically, or explore ramifications.
    3. Anything it says could be plagiarized.
    4. Sometimes it creates fictional references by “collating” references, not just information.
  3. It doesn’t always answer questions very well:
    1. Sometimes its answers don’t make any sense.
    2. Sometimes it can’t answer the question at all.
    3. Sometimes it answers the question, but the answer is wrong.

Where are the boundaries for academic work?

The Australian Standards for Editing Practice (ASEP) set explicit limits on the editor's role, and it is notable that software and proofreaders operate within these guidelines:

  1. The editor's role is limited to language, expression, referencing, and academic style. They may help the student phrase ideas more clearly, resolve inconsistencies, fix confusing paragraphs, and make the argument more persuasive.
  2. They may not write the student’s dissertation nor make changes to structure and content.

The implications are as follows:

  1. Any usage of AI within the ASEP bounds of editing is permissible, although supervisors are entitled to know when it is used.
  2. In dissertation and thesis programs, supervisors must be informed beforehand when it is to be used. This may include a mention in the proposal.
  3. For any usage beyond ASEP bounds, researchers should give references to the AI program, and describe the way it was used, just the same as any other software used to analyze data.

What now? And what next?

Some AI programs can already produce first drafts of literature reviews (see below), and can already sort data into an intelligible outline, which is a rudimentary form of analysis. AI could also write annotated bibliographies.

The direction I believe AI should take for research is to become a more narrowly focussed application that is trained on a narrower range of source information (e.g. reputable journal articles, monographs, and dissertations). It also needs to be specifically trained to follow some rules of academia. In particular, it needs to be able to refrain from plagiarism, and to write accurate citations, references, and bibliographies.

The irony is that every time someone points out a weakness in AI, it serves as a way for AI to overcome those weaknesses. Perhaps AI applications will become cleverer but more task-specific. This will reduce the number of mistakes and ethical-legal problems.

Could AI be applied in research?

Computers have long been used in research. The author simply cites the particular program and reports the detailed procedure.

AI is useful in research in various ways, although perhaps not specifically Chat GPT. For example:

  1. AI can identify patterns in data that are imperceptible to humans (see the sketch after this list).
  2. AI can process visual images and sounds.
  3. AI can probably also convert pseudocode into working computer programs.
  4. It is fairly easy to foresee that AI could also write methodology plans as long as the development is iterative. That is, give it a task, get the AI's response, then ask the program to improve it until it is correct.
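
As a small illustration of point 1, the sketch below uses scikit-learn (one of the open-source packages listed near the end of this page) to flag unusual rows in a numeric dataset. The data is synthetic and the settings are defaults; it is a sketch of the idea, not a recommended analysis.

  # Flag rows that differ from the bulk of the data in ways that are hard
  # to spot by eye, using an isolation forest from scikit-learn.
  import numpy as np
  from sklearn.ensemble import IsolationForest

  rng = np.random.default_rng(0)
  typical_rows = rng.normal(loc=0.0, scale=1.0, size=(200, 5))
  odd_rows = rng.normal(loc=4.0, scale=1.0, size=(5, 5))  # drawn from a different distribution
  data = np.vstack([typical_rows, odd_rows])

  model = IsolationForest(random_state=0).fit(data)
  flags = model.predict(data)  # -1 marks rows the model considers anomalous

  print("Rows flagged as unusual:", np.where(flags == -1)[0])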

Could AI play an even more substantial role in research? Perhaps, but it would take many iterations. For example, suppose you have a particular kind of complex dataset and need an algorithm to solve a problem: describe the data and the goal, ask the AI for a candidate approach, test it on the data, and feed the results back until it works.
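
A rough sketch of that loop is below. The ask_model() helper is hypothetical, standing in for whatever chatbot or API is used; the point is the loop itself: ask for a draft, test it against the dataset, and feed any failure back into the next request.

  # Iteratively ask an AI assistant for an algorithm until it passes a test.
  import traceback
  from typing import Callable

  def ask_model(prompt: str) -> str:
      """Hypothetical placeholder for a call to an AI assistant."""
      raise NotImplementedError("replace with a real chatbot or API call")

  def develop_algorithm(task: str, run_test: Callable[[str], None],
                        max_rounds: int = 5) -> str:
      prompt = task
      for _ in range(max_rounds):
          candidate = ask_model(prompt)   # get a draft solution
          try:
              run_test(candidate)         # try it on the real data
              return candidate            # stop once it works
          except Exception:
              # feed the failure report back so the next draft can correct it
              prompt = task + "\nThe previous attempt failed:\n" + traceback.format_exc()
      raise RuntimeError("no working solution after several iterations")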

Literature reviews

AI writing aids can now write literature reviews. For example, Hyperwrite searches for relevant scholarly sources, summarizes their main points, methodologies, and findings, and organizes these summaries into a well-structured literature review. The result is a comprehensive, academically styled review with proper citations …

An AI-written literature review is helpful as a search strategy and as a way to create a possible outline.

However, AI-written literature reviews will probably be disallowed in academic research for the following reasons. First, AI writes the text, so it is the same as any other chatbot text; the researcher cannot demonstrate accountability for what is written. Even worse, the researcher has not actually read the articles, so he/she cannot ensure that the text is correct or even that he/she understands it. Second, AI cannot think critically; it can only collate existing works. In other words, the AI-written literature review is at best a collation, and cannot meet the requirements of a critical review.

The next stage for literature-review AI is to explore its potential for writing textbooks.

Data analysis

Intellectus Qualitative does thematic coding for the analysis of qualitative data.
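
Its internal method is not described here, but as a very crude illustration of grouping short excerpts by similar wording, the sketch below clusters a few invented interview responses with scikit-learn. Real thematic coding still needs a researcher (or a far more capable model) to name, merge, and check the themes.

  # Group short interview excerpts by similar wording as a crude stand-in
  # for thematic coding; the excerpts here are invented examples.
  from sklearn.cluster import KMeans
  from sklearn.feature_extraction.text import TfidfVectorizer

  excerpts = [
      "I felt supported by my supervisor throughout the project.",
      "My supervisor gave helpful feedback on every draft.",
      "Library access was limited and slowed my reading down.",
      "Finding sources was hard because of limited database access.",
  ]

  vectors = TfidfVectorizer(stop_words="english").fit_transform(excerpts)
  labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

  for label, text in zip(labels, excerpts):
      print(label, text)  # excerpts sharing a label form a candidate theme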

Other AI applications

Llama: https://llama.meta.com/
Specialist essay writer: https://www.perfectessaywriter.ai/
ZeroGPT claims to be reliable at detecting text written by AI: https://www.zerogpt.com/
Spinnerchief claims to be reliable at evading programs that detect text written by AI: https://blog.spinnerchief.com/.

Many other AI applications are now open source and (apparently) free:
TensorFlow
IBM Watson
Apache Mahout
OpenNN
Scikit-learn
Accord.NET
Torch
With thanks to Goodfirms