Last week, I presented this paper on cognitive offloading in children. The discussion moved to AI, and how it might differ from the ways our species has offloaded cognition until now. Prior to AI, the dominant form of offloading was writing. We manipulate and structure the environment in many ways to aid cognition, but when we think of "offloading" information, most of us think of writing things down to remember later. A bit worse is looking things up on Google, say, to cheat on a test. But perhaps fundamentally different from these is how we use AI to problem-solve.
I'm always using ChatGPT to check my scripts. I do a ton of data cleaning and things like parsing eye-tracking data, which (for me) can get really confusing. I don't think I would have made as much progress in my PhD so far without AI to speed up debugging or to help me turn pseudocode into a usable chunk of code.
But I think this is actually bad in the long run. I've been toying with the idea of a no-AI November, and I think I will commit to it. Today I had a lot of scripting to do in R, and I tested out not using ChatGPT for anything. Yes, it took me 5x as long to make a new plot I had never made before, and a miser with their time would take that as proof of how useful and necessary AI is for staying productive in the face of speedy competition (who are also undoubtedly using AI). But! That was also 5x more thinking, and 5x more time spent working on a problem. That means every time you use ChatGPT - even for something minuscule, like plotting something or writing a simple 10-line for-loop - you are cutting your thinking time at least in half. That has got to be bad for those of us in the business of building knowledge and learning skills.
So here goes no-AI November!