Software engineers will be obsolete in 5 years. Just kidding. But a lot of AI gurus are already saying that. The truth is far from that, but they are not entirely wrong.
Yes, I know I'm late in looking at these tools, but I find it best to let the dust settle and the hype cycle cool off before taking a look.
Hello readers and avatars. I know it's been a while. I had a bout of COVID that took me out of commission. Let's get back into the swing of things.
The Truth About Copilot and ChatGPT
By now, if you haven't heard of Copilot or ChatGPT, I'd assume you've been living under a rock or perhaps just woke up from a long coma. Copilot is GitHub's brainchild, designed primarily to assist programmers in coding and software development. As of now, it doesn't have the same chat functionality as ChatGPT. However, with beta testing underway, we can anticipate this feature in the near future.
What they actually do is generate code suggestions based on the context of your code. Let’s check out some examples from Copilot:
After typing a comment describing what I wanted to do, Copilot gave me a working implementation for finding the factorial of a number. Isn’t that handy?
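I typed the comment on the first line and Copilot filled in the rest. Reconstructed from memory rather than copied verbatim, it went something like this:

```python
# function to find the factorial of a number
def factorial(n):
    # factorial is undefined for negative numbers
    if n < 0:
        raise ValueError("factorial is not defined for negative numbers")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```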
Let’s say I import pytest:
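Copilot then starts suggesting tests right below the function, in the same file. Again reconstructed, the suggestion looked something like this:

```python
import pytest

# Copilot's suggestions land right here, in the same file as factorial
def test_factorial():
    assert factorial(0) == 1
    assert factorial(5) == 120

def test_factorial_negative():
    with pytest.raises(ValueError):
        factorial(-1)
```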
Unfortunately, Python testing typically happens in a separate file, so that context is lost on Copilot. It's something ChatGPT does OK on, though.
It's interesting to start getting a sense of what Copilot is good at and what ChatGPT is good at. Being spoiled, I have access to both.
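Ask ChatGPT to write the tests and it produces something along these lines (a sketch; it assumes the function above was saved as factorial.py, and your output will vary):

```python
# test_factorial.py -- pytest picks up files and functions prefixed with "test_"
import pytest

from factorial import factorial  # assumes the code above lives in factorial.py


def test_factorial_of_zero():
    assert factorial(0) == 1


def test_factorial_of_small_numbers():
    assert factorial(1) == 1
    assert factorial(5) == 120


def test_factorial_rejects_negative_input():
    with pytest.raises(ValueError):
        factorial(-3)
```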
Software Engineers Laid Off?!
Many AI enthusiasts argue that the age of coders and engineers is coming to an end, suggesting that with the advent of sophisticated AI tools, virtually anyone can become a coder. While these tools have indeed democratized access to programming, it's essential to note that quality often comes at a price. As with many things in life, you typically get what you pay for. Seeking shortcuts or bargains in every aspect can lead to compromising on the end product's quality.
If an AI tool like ChatGPT consistently produces more accurate code than a human, it may indicate gaps in that individual's skill set. While AI can be a tremendous supportive tool, it doesn't negate the need for human expertise. There's a potential reality where a company that once required ten engineers might operate with six, thanks to tools like ChatGPT and Copilot. However, predicting mass layoffs of engineers seems overreaching. Long-time readers might recall the emphasis we've placed on personal branding and salesmanship. These are still invaluable skills for engineers in this evolving landscape.
Decision-makers predominantly lean towards human insight when it comes to crucial decisions and recommendations. Trust is built upon accountability. If decisions are exclusively automated and driven by AI, the question arises: Who takes responsibility when things go awry? Human accountability remains crucial. When mistakes happen, there needs to be discernible responsibility, which is hard to allocate if AI is solely at the helm.
How to Use Copilot and ChatGPT
Top-tier engineers are not just using tools like ChatGPT and Copilot; they are leveraging them to optimize their time. These AI tools excel at generating boilerplate code for foundational functions. Beyond just the basic code, they can supply preliminary tests, craft initial documentation, and add explanatory comments. However, a point to note is that ChatGPT can occasionally be verbose in its commenting, which may require some manual pruning. (ChatGPT definitely over-comments.)
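To give you a feel for it, here's a made-up but representative sample of the commenting style you get back:

```python
def add(a, b):
    # This function takes two numbers, a and b, as input parameters.
    # It adds the two numbers together using the + operator.
    # Finally, it returns the resulting sum to the caller.
    return a + b  # Return the sum of a and b.
```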
It's crucial to approach these tools with discernment. Handing them large chunks of vague or ill-defined requirements can lead to unintended consequences. They might start to conjure up impractical APIs or, even worse, produce code that's riddled with bugs or lacks proper security measures.
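As a contrived example of my own (not actual ChatGPT output), a vague prompt like "write a function that looks up a user by name" can come back as the first version below, when the second is what you actually want:

```python
import sqlite3

def get_user(conn: sqlite3.Connection, username: str):
    # BAD: building SQL by string interpolation invites SQL injection
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

def get_user_safely(conn: sqlite3.Connection, username: str):
    # GOOD: a parameterized query lets the driver handle escaping
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchone()
```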
One of the standout benefits of these AI aids is their ability to generate comprehensive test suites and preliminary design documentation. This can significantly accelerate the development phase, freeing up engineers to focus on more intricate and high-level tasks. By offloading the tedious boilerplate work onto these tools, engineers can channel their energy and expertise into tackling more complex challenges, such as refining software architecture, perfecting pivotal implementations, or engaging in thorough code reviews. With these foundational tasks streamlined, engineers can then explore more lucrative and innovative endeavors, tapping into the potential of wifi money or passive income streams in the digital realm.
Another area where these tools can be helpful is as a second pair of eyes for code reviews. Let's take a look at an example:
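This is a minimal, hypothetical stand-in for the kind of exchange I mean: paste in a function, ask for a review, and the comments below paraphrase the sort of feedback that comes back:

```python
def average(numbers):
    return sum(numbers) / len(numbers)

# Feedback an AI review pass tends to surface (paraphrased):
# - an empty list raises ZeroDivisionError; decide whether that should
#   be a clearer ValueError or a None return
# - non-numeric elements fail deep inside sum() with a cryptic TypeError
# - for large lists of floats, math.fsum(numbers) accumulates less
#   rounding error than sum()
```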
These AI tools, such as ChatGPT and Copilot, can function as a supplementary pair of eyes, meticulously scanning your work to ensure no detail is overlooked. They excel at identifying and suggesting obscure edge cases which might not immediately come to mind. After the tool pinpoints these scenarios, it's up to you as the developer to determine their relevance and decide whether it's crucial to address them. In some instances, ChatGPT has even surpassed expectations by offering a more secure coding solution than initially provided.
However, it's pivotal to remember that ChatGPT isn't a magic bullet. While it can be insightful and helpful, it doesn't always yield a production-ready solution. The standards and requirements for code destined for a live production environment are inherently more rigorous and demanding compared to those of a casual hobby project. It's crucial to exercise discernment and conduct thorough reviews before integrating AI-generated solutions into critical systems.
AI is not God
To those vigilant and discerning, it should be clear: AI, though advanced, is far from infallible. Mistakes are part and parcel of AI outputs, and there are instances when it can provide erroneous information, misconstrue facts, or even invent data. At its core, AI is a reflection of human knowledge and our existing databases. Since human knowledge can be riddled with inaccuracies and misconceptions, AI, trained on this data, is bound to replicate or even amplify these errors. In essence, AI embodies the adage "to err is human." As a reminder: don't take everything you see online at face value, even if it's generated by seemingly sophisticated AI.
The concerns raised by advocates of AI safety echo this sentiment. When they caution that "the AI could be wrong," it's not just a theoretical or distant worry. Humans, even experts in their fields, often hold or disseminate incorrect views, misunderstandings, or outdated information. AI, learning from such humans, is therefore naturally predisposed to errors. It's essential, then, not to place undue trust in AI outputs without rigorous verification and critical evaluation.
Conclusion
In an age where technology is rapidly advancing, tools like AI are at the forefront, revolutionizing how we approach tasks and offering seemingly limitless possibilities. However, as with any tool, AI has its constraints. Its proficiency is deeply rooted in human knowledge, which, being imperfect, introduces the potential for errors in AI outputs. While AI can act as an invaluable assistant, enhancing efficiency and illuminating overlooked details, it's crucial to approach its suggestions with discernment. Whether it's coding, information dissemination, or any other application, a balanced perspective is essential: celebrate AI's capabilities but remain vigilant, critically evaluating its outputs. After all, in a world increasingly reliant on digital information, a discerning mind is our best defense against misinformation, be it human or machine-generated.
-celt