There is one big issue I see with such programs that has me worried. It is clear that these tools are powerful, and how much self-awareness we can imbue into them is a matter of curiosity. But if we have a bot that can essentially do things for us, there is less incentive to learn things on a deep level. You mention they are great "if you know what you are doing", which is a key point. When do we know what we are doing, really?
It seems that, as with previous technologies, a productivity boost is the first response, and it is then met by even more expectations and needs. Meeting purely computational needs is not too big of an issue (except perhaps for things like Bitcoin), but we are already seeing how fast the tech scene, especially AI, is moving. Thus, the limits of our development are no longer technological in nature but biological, meaning that ChatGPT, while super useful, is powering this increase in complexity several times over (according to the LinkedIn influencers) and discouraging deep knowledge of things.
This means that we are, as individuals, incapable of keeping up with the speed of development in these areas, which threatens society not through our laziness to learn, but through our biological limits on learning. In hypertechnological societies the underlying complexity of things is beginning to be lost on us, which means that soon enough none of us will understand how anything works, and that is a dangerous position to be in.
You make very interesting points.
On biological limits: Would it be reasonable to argue that a composition of generative AI forms - Vision, Audio, Text - could in principle usher in the next evolutionary form of Homo sapiens? I really do not wish to be speculative here, but such is the nature of the field we are in. This next evolutionary leap would then be the result of a hyper-efficient data compression achievement.
On Self-Awareness: By "knowing what we are doing", I mean it in a purely functional sense. You want to implement a set of functions. The inputs and outputs to each of these functions are clearly defined. Then the outputs suggested by such generative models can be easily verified. It would be interesting to ask what happens when inputs and outputs are not clearly defined, or the objectives are more open ended.
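To make that functional sense concrete, here is a minimal sketch of the workflow I have in mind. The function name `moving_average`, its candidate body, and the spec table are all hypothetical stand-ins; the point is only that once inputs and outputs are pinned down, accepting or rejecting a model's suggestion reduces to running a check.

```python
# Sketch: verifying a (hypothetical) Copilot-suggested function against a
# hand-written specification of exact inputs and expected outputs.

def moving_average(xs, window):
    """Candidate implementation, as a generative model might suggest it."""
    if window <= 0 or window > len(xs):
        raise ValueError("window must be between 1 and len(xs)")
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]

# The human-owned part: the specification, as concrete input/output pairs.
SPEC = [
    (([1, 2, 3, 4], 2), [1.5, 2.5, 3.5]),
    (([5, 5, 5], 1), [5.0, 5.0, 5.0]),
    (([1, 2, 3], 3), [2.0]),
]

def verify(fn, spec):
    """Accept the suggestion only if it matches every case in the spec."""
    return all(fn(*args) == expected for args, expected in spec)

if __name__ == "__main__":
    print("suggestion accepted" if verify(moving_average, SPEC) else "suggestion rejected")
```

When the objective is open-ended there is no spec table to write down in the first place, which is exactly where this easy-verification story breaks.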
On Understanding: I think it is important to develop tools that allow us to explain these models better. This is an active field of research in ML, as I'm sure you are already aware. I have recently seen a bunch of postdoctoral research positions open up at a variety of think tanks that wish to do exactly this: what exactly have these large pre-trained models learnt? At a larger scale, if you think of the human being as a black box too (in many ways it still is), then in principle we, as a society, should not have trouble accommodating other black boxes, regardless of whether we understand them or not. It is an unsettling thought, I agree.
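As a toy illustration of the kind of tooling I mean, here is an input-gradient saliency sketch in PyTorch. The model and data are random placeholders, and real interpretability work (probing, attribution methods, circuit analysis) goes much further; this only shows the flavour of asking "which parts of the input does the model actually respond to?".

```python
# Sketch: input-gradient saliency on a tiny untrained PyTorch model.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

x = torch.randn(1, 8, requires_grad=True)  # the input we want explained
score = model(x).sum()                     # the model's output for that input
score.backward()                           # gradients flow back to the input

# A large |gradient| means the output is sensitive to that feature: a crude
# first answer to "what has the model picked up on?"
saliency = x.grad.abs().squeeze()
for i, s in enumerate(saliency.tolist()):
    print(f"feature {i}: sensitivity {s:.3f}")
```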
Specifically with respect to Co-pilot (I will probably discuss ChatGPT later), I would like your thoughts on this cut: Co-pilot is the natural evolution in the general history of programming towards higher abstractions for manipulating chips (assembly, low-level languages, high-level languages, less obtuse high-level languages, snippet-helper natural-language Co-pilot, broader module-level Co-pilot, complete project-level Co-pilot, narration in natural language by someone with no exposure to programming, brain interface (!)...).
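To make the rungs of that ladder a little more concrete, here is the same trivial task (summing the squares of a list) at a few of them. The assembly and C rungs are illustrative comments only, and the final rung is the kind of prompt one would type to snippet-level Co-pilot today; none of this is meant as an exact history, just a sketch of the climb.

```python
# The same task at a few rungs of the abstraction ladder.

# Rung 1, assembly (illustrative sketch): loads, multiplies, adds into a register.
#   loop:  mov  eax, [esi]
#          imul eax, eax
#          add  ebx, eax
#          ...
#
# Rung 2, C:
#   long sum_sq(const int *xs, int n) {
#       long s = 0;
#       for (int i = 0; i < n; i++) s += (long)xs[i] * xs[i];
#       return s;
#   }

# Rung 3, a high-level language with near-natural syntax:
def sum_sq(xs):
    return sum(x * x for x in xs)

# Rung 4, the snippet-level natural-language prompt:
#   "write a function that returns the sum of the squares of a list"

print(sum_sq([1, 2, 3]))  # 14
```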
Re: Jobs - I think the surprise is that the jobs thought to be most AI-secure seem to be threatened to a larger degree than plumbing.
On higher abstractions: This is true, but I think one can push this thought further. The higher-order abstraction of Co-Pilot is also markedly different from the increasing abstractions of the previous decades - following your natural evolution analogy, Co-Pilot has acquired communication. Fortran does not talk to you. Neither does C, nor Python (though one clearly sees the shift towards more and more natural-language syntax). Co-Pilot can. Following this, the 'narration in natural language...' part of your response would be the human-computer interface. Since GPT has trained on nothing but human-acquired data at points in time, this interface will be a communication with our past, present, and future, all at once!
On Jobs: This is also true, as simulating a full-scale blue-collar worker requires simulating joints, muscles, data transmission between hubs, common sense, etc. Simulating a white-collar worker (and this could be an unfair generalisation, I am not sure) requires only simulating language, which is far easier to do. The mind frontier, it seems, will be reached before the body frontier. In more practical terms, data science may become in 10 years what IT has become today: it will simply not be enough to be a beginner Pandas, Numpy, PyTorch analyst anymore.
Aside: I like this AI-secure metric. What are jobs with high AIS scores? What are jobs with low AIS scores? How are these scores evaluated?