Ask HN: Is using AI tooling for a PhD literature review dishonest?

I'm a PhD student in structural engineering. My dissertation is about using LLM agents to automate FEA calculations in the Ukrainian software that companies commonly use. I'm writing my literature review now, and I've vibecoded a personal local dashboard that helps me manage the process.

I use LLM agents to fill in the LaTeX template in a GitHub repo (this automates formatting, and I can review diffs in an IDE). Then I run ChatGPT Pro to collect the papers relevant to my topic, along with why each is relevant, and I download the ones whose PDFs are available online. Everything lives in a structured tree of folders with plain files like Markdown and JSON.
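To give an idea of the shape of those files, a claim record might look roughly like this (the field names and values here are simplified illustrations, not my exact schema):

```python
import json

# Hypothetical shape of one claim record in the JSON store; every field
# name and value below is illustrative, not the exact schema.
claim = {
    "claim_id": "C-042",
    "statement": "LLM agents can reduce manual FEA pre-processing effort.",
    "reviewed": False,  # ticked after my manual review in the dashboard
    "quotes": [
        {
            "quote_id": "Q-117",
            "text": "exact passage copied from the paper",
            "pdf": "papers/example2023.pdf",  # file the verifier checks against
            "verified": False,  # set by the verification script
        }
    ],
}

print(json.dumps(claim, indent=2))
```

Plain JSON plus Markdown keeps everything diffable in git, which is half the point of the setup.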

The idea of the dashboard is the following: I run Codex through a web chat to identify quotes relevant to my dissertation topic and explain how they're relevant; it combines them into a set of claims, each linked to its supporting quotes. Then I review every quote and every claim manually and tick the boxes. There's also a button that runs a verification script, which checks that each quoted passage really appears verbatim in the source PDF. This way I collect real evidence and pick up new insights while reading.
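The verification script itself is nothing fancy; at its core it's normalized substring matching. A rough sketch of that core (assuming the PDF text has already been extracted with a library like pypdf; the function names are just illustrative):

```python
import re
import unicodedata

def normalize(text: str) -> str:
    # Undo common PDF-extraction artifacts so verbatim quotes still match:
    # unicode ligatures (NFKC), end-of-line hyphenation, messy whitespace.
    text = unicodedata.normalize("NFKC", text)
    text = text.replace("-\n", "")       # re-join words hyphenated across lines
    text = re.sub(r"\s+", " ", text)     # collapse newlines/spaces to one space
    return text.strip().lower()

def quote_in_text(quote: str, extracted_text: str) -> bool:
    # True if the normalized quote occurs verbatim in the extracted PDF text.
    return normalize(quote) in normalize(extracted_text)
```

The normalization matters: PDF extraction mangles whitespace and hyphenation, so a naive `quote in text` check produces false negatives.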

I remember doing all this manually during my master's degree in the UK. That was a terrible and tedious experience, partly because I have ADHD.

So my question is, is it dishonest?

Because I can defend every claim in the review: I built the verification pipeline and reviewed each one manually. I arguably understand the literature better than if I had read everything myself and highlighted it by hand. But I know that many universities would consider any AI-generated text academic misconduct.

I don't quite understand the principle behind this position. If you outsource proofreading, nobody cares; the same goes for Grammarly. But if I use an LLM to generate text from verified, structured, human-reviewed evidence, it might be considered dishonest.

8 points | by latand6 17 hours ago

11 comments

  • love2read 16 hours ago
    Someone against AI will tell you yes, someone for AI will tell you no. The only thing I can really say is that saying you have ADHD so you should have a reprieve from the normal rules is something that I don't agree with.
    • jimbooonooo 14 hours ago
      I was diagnosed later in life with ADHD and struggled academically, but agree with this completely. Everybody faces difficulties in life, and ADHD doesn't justify constant exceptions. Your workplace will be far less accommodating, and you need to figure out how to adapt.

      Using AI for a literature review is great, but I think the onus is on you to both verify the output AND disclose usage of said tool. Clearly describing your methodology is an important skill for writing papers anyway.

      • latand6 7 hours ago
        I’d be happy to disclose and would even consider sharing how I did it all

        I’ve even drafted the acknowledgment section with a brief explanation of how I used AI tools

        The only part I’m concerned about is the stigma around AI use and the risk that it gets treated as misconduct

  • Acacian 2 hours ago
    The verification pipeline is the most valuable part of your workflow. Most people who use AI for literature reviews skip exactly that step — they trust the output and move on.

    What you're describing is closer to building a testing harness than "using AI to write." You're asserting claims, checking them against source PDFs, and reviewing manually. That's more rigorous than most manual lit reviews where people skim abstracts and cite papers they half-read.

    Document the pipeline as methodology in your dissertation. That turns a potential misconduct question into a contribution.

  • austinjp 14 hours ago
    While your dashboard sounds fancy, this part raises issues:

    > I run ChatGPT Pro to collect all relevant papers

    Any literature review must be reproducible. If you can't say exactly what queries you ran against exactly what databases, you'll get into trouble. Whether or not that's the way things should be is irrelevant: it's the way things are.

    You should ask your supervisor if your approach is okay. If necessary, ask it from a theoretical perspective: "would it be okay if I were to....?" If your supervisor is unavailable then seek advice from their colleagues.

    Since you mention ADHD, you're likely to be strongly motivated by novelty. Don't spend time building a dashboard that you could spend on writing your thesis. If you're not getting support from your university, get it now. It might not help, but it's a signal to the university that you're engaging with the system.

    • latand6 7 hours ago
      Can you really reproduce it though?

      I thought it’s the experiments that have to be reproducible, not the literature review

      • austinjp 6 hours ago
        Whether you can or can't in reality is moot, unfortunately. The literature search in biomedical fields should indeed be theoretically reproducible. I don't know about other fields, but it would seem odd to me if a search were not reproducible; that would lead to a very arbitrary literature selection.

        As for the experiments, yes, in experimental fields. But in all (most?) fields, including non-experimental, the whole process should be well documented so it could be reproduced end-to-end if possible. If it's not reproducible there should be good, well explained reasons why not.

        Note that reproducibility does not necessarily mean the exact same answer will definitely emerge, just that the methods can be followed closely.

        • latand6 5 hours ago
          Got that, thanks for the advice, I'll ask my supervisor how to address that properly
    • BrenBarn 13 hours ago
      > Any literature review must be reproducible.

      That's totally at odds with my understanding, but perhaps this differs between fields.

      • austinjp 6 hours ago
        Quite probably there are differences between fields. In biomedical literature reviews the search terms and databases are detailed, and (in systematic reviews) a PRISMA flowchart [0] provided. The theory being that other researchers could repeat the searches and the in/out decisions and get the same stack of papers to review.

        [0] https://www.prisma-statement.org/prisma-2020-flow-diagram

  • fyredge 15 hours ago
    Yes and no. The first thing to understand is that in academia, knowledge is the work. You are being trained to absorb existing knowledge, hypothesise new knowledge and test if it is valid.

    LLMs are a useful tool if you want to generate text. But in the context of research, this is quite dangerous. Think of a calculator that spits out the wrong answer 10% of the time: would you trust it in an exam? How about 5%? 1%? 0.1%? The business of research is the business of factual knowledge. Every piece of information is expected to be scrutinized. That's why dishonesty is severely looked down upon (falsifying data, plagiarism, etc.)

    I would say your use case is not dishonest, but I would also like you to think from the perspective of the university. How would they know if their students are using it honestly like you did? How can they, with their limited resources, make sure that research integrity is upheld in the face of automated hallucinations?

    At the end of the day, the question is not whether using AI is dishonest; it's about being able to walk into an antagonistic panel and defend your claim that you understand the knowledge of your field (without live AI help). If you can do that and also make sure the contents are not hallucinated, then I don't see why not.

    • latand6 7 hours ago
      Yeah, that’s exactly my point. The AI just takes over the boring job of collecting evidence, and I’m the validator. This way I’m able to process papers much faster than without AI, primarily because I don’t have to spend 70% of my time reading abstracts and sections of papers I’ll never need. Doing it manually is very exhausting.

      That being said, I feel more productive in terms of generating insights beyond what the AI said. I also have a chat interface where I can basically ask anything I want about the PDF (and yeah, I’m aware of NotebookLM, I just don’t trust Gemini)

  • matzalazar 6 hours ago
    Think about it this way: 70 years ago, would a physicist be considered a cheater for using a calculator to solve complex differential equations in their daily work? People tend to frame the moral dilemmas of new technology through the lens of everyday human tasks, and I think that's just a prejudice.
  • malshe 13 hours ago
    I don't think what you are doing is dishonest. But my opinion hardly matters.

    My advice is to talk to your dissertation committee chair to understand whether they think it is dishonest. Furthermore, read your university's AI usage policies. If they don't consider what you are doing a permissible use of AI, no amount of assurance on HN or any online forum is gonna help you.

    • latand6 7 hours ago
      I agree with you and that’s exactly what I’m going to do. It’s just that I may be more persuasive if I’m prepared
  • Neosmith_amit 14 hours ago
    No, I don't think it is dishonest.

    At the same time, I would recommend documenting your methodology explicitly in the dissertation: describe the verification pipeline, and make it clear what you reviewed manually versus what was automated. That transparency converts "dishonest?" into "methodologically rigorous."

    Here is the thing, academic policy is NOT really about honesty. It is about trust. Universities cannot distinguish your workflow from someone who prompted GPT to write their lit review wholesale.

    More than the ethical distinction, I believe the rule around AI usage is blunt because enforcement is pretty hard.

  • QubridAI 15 hours ago
    Not dishonest if you verify everything and understand it deeply, but you should be transparent about your AI use, since many universities care more about disclosure than the method itself.
  • bjourne 14 hours ago
    You cannot copy others' work and claim it is your own. Thus, you cannot copy ChatGPT's work and claim it is your own. There is a qualitative difference between having an LLM generate text and having a program spell- and grammar-check text. Since you are not going to highlight which passages in your article ChatGPT wrote for you, and instead intend to pass it off as your own creative work, it is dishonest. Very dishonest. If caught, you will get in trouble and may be kicked out of your academic programme.
    • latand6 7 hours ago
      There is not a single paragraph that I might “steal” from ChatGPT. I’m consistently using multiple LLMs to write, polish, rephrase, and make all other kinds of edits

      I really don’t get why typing it all out manually is necessary. Can you explain?

  • adampunk 15 hours ago
    I don’t know if it is dishonest. What I do know is that it will only save you time if you have a very specific and testable need. Otherwise it will appear to save time and produce something that you won’t be proud of.
  • freelancedata 14 hours ago
    [flagged]