Using ChatGPT?

Ross Woods, 2023, with thanks to Saba Yasmin, Dawie van Vuuren, and Tom Granoff

When can students use ChatGPT? Many of its uses are quite ethical, but it is easy, risky, and unethical for students to have it write their essays and theses, and new software can now detect text written by an AI chatbot. For any educational use, current best practice is to get permission from your supervisor or instructor before using it.

Outside academia, it is helpful for writing rough drafts, although several iterations may be necessary.

Benefits of ChatGPT

  1. If you are writing a paper or thesis, ask it for a list of research topics in your area of interest. The topics might not be exactly what you need, but they can be a helpful starting point.
  2. ChatGPT can suggest relevant sources for a literature review, and perhaps summarise each article. It can also suggest data sets.
  3. ChatGPT can restate complicated topics in text that is easier to read.
  4. Ask ChatGPT for an outline of your paper, expressed perhaps as chapters and sections, or as sections with arguments pro and contra.
  5. Use ChatGPT to proofread your spelling and grammar.
  6. Use ChatGPT to suggest better word choices.
  7. Use ChatGPT to express your work in a particular style, such as a particular kind of document, the style of a prominent author, or a style suited to a particular readership. It can also re-express given information from another perspective. This is helpful for several reasons:
    1. Its default language style is quite bland.
    2. It might adjust your vocabulary choice.
    3. It might also improve the readability level, which largely relates to sentence length.
  8. It can also translate documents.
  9. It can write or check computer code in many different programming languages.
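To illustrate the readability point above: readability formulas weight average sentence length heavily, so a rough proxy is easy to compute yourself. This is a hedged sketch, not any official readability standard; the example sentences and the function name `avg_sentence_length` are illustrative inventions.

```python
import re

def avg_sentence_length(text: str) -> float:
    """Average words per sentence: a crude proxy for readability,
    since shorter sentences generally read more easily."""
    # Split on sentence-ending punctuation; discard empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return len(words) / max(len(sentences), 1)

long_version = ("The applicant, who had previously submitted numerous "
                "documents, was eventually granted approval after the "
                "committee met twice.")
short_version = ("The applicant submitted many documents. "
                 "The committee met twice. Approval was granted.")

print(avg_sentence_length(long_version))   # one long sentence: 17.0
print(avg_sentence_length(short_version))  # three short sentences: 4.0
```

Full readability scores such as Flesch reading ease also factor in syllables per word, but sentence length alone already distinguishes dense prose from plain prose, which is the adjustment ChatGPT typically makes.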

Limitations

  1. ChatGPT is not a subject-matter specialist, so it can be very inaccurate in details.
  2. ChatGPT currently only collates and re-expresses existing information to answer your question or follow your instructions, expressing it coherently in written language. This has various implications:
    1. It cannot work beyond the most recent information it was trained on.
    2. It cannot do original research because it cannot actually analyze, identify assumptions, think critically, or explore ramifications.
    3. Anything it says could be plagiarized.
    4. Sometimes it creates fictional references by “collating” the references themselves, not just the information in them.
  3. It doesn’t always answer questions very well:
    1. Sometimes its answers don’t make any sense.
    2. Sometimes it can’t answer the question at all.
    3. Sometimes it answers your question, but the answer is wrong.

What next?

The direction I believe AI should take for research is to become a more narrowly focused application trained on a narrower range of sources (e.g. reputable journal articles, monographs, and dissertations). It also needs specific training to follow the rules of academia; in particular, it must refrain from plagiarism and write accurate citations, references, and bibliographies. AI could then write annotated bibliographies, first drafts of literature reviews and, with iteration, methodology statements. It can probably already sort data into an intelligible outline, which is a rudimentary form of analysis. In any case, you will still need to edit its output.

The irony is that every time someone points out a weakness in AI, the observation itself becomes a way for AI to overcome that weakness. Perhaps AI applications will become cleverer but more task-specific, reducing both mistakes and ethical-legal problems.

Could AI be applied in research? Probably, but you would need a good problem to solve to make a project out of it, and you would need to be willing to use many iterations. For example, suppose you have a particular kind of complex dataset and need an algorithm to solve the problem: