“GPU cluster farms will become part of the fabric of our physical layer, no longer shunned in corners of bitcoin mining, but an integral part of our infrastructure.”
Do you mean what Altman is calling Compute? You say ‘energy’, is there no established name for it yet?
How viable do you think dePIN (Decentralized Physical Infrastructure Networks) becomes? Community or regional AI that draws from local idle compute? Would the corporate world have any pressure to go in this direction, or will they continue to try to centralize?
Hey James, thank you for messaging.
Compute to me is CPU, GPU or LPU....
I try to be specific in my wording because GPU clusters specifically are expensive and have historically been used sporadically, for short-term, timeboxed projects.
However, the change is radical. GPU clusters are becoming a permanent addition to our tech stack. Just as companies use databases to store/retrieve data, they now use AI models for querying, decision making, and recommendations. Small models like Llama 7B can fit on a single GPU, but we are fast approaching models that need multiple GPUs to fit - e.g. 70-80B parameter models.
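As a rough back-of-the-envelope illustration (the 2-bytes-per-parameter and 80 GB-per-card figures are my own assumptions, and this ignores KV cache and activation memory), here is why a 70-80B model no longer fits on one card:

```python
# Back-of-the-envelope sketch: memory needed just to hold model weights.
# Assumes fp16 (2 bytes/param) and ~80 GB per GPU; ignores KV cache,
# activations, and runtime overhead, which add considerably more in practice.
import math

def weight_memory_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """GB of memory required to store the weights alone."""
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes per GB

def gpus_needed(params_billion: float, gpu_memory_gb: float = 80.0) -> int:
    """Minimum number of such GPUs to hold the weights."""
    return math.ceil(weight_memory_gb(params_billion) / gpu_memory_gb)

for size in (7, 70, 80):
    print(f"{size}B params: ~{weight_memory_gb(size):.0f} GB -> {gpus_needed(size)} GPU(s)")
# 7B fits comfortably on one card; 70-80B already needs at least two.
```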
Inference can be done on CPUs (time-series predictions), GPUs (regular queries of AI models like LlamaIndex, image generation, video/audio), and LPUs (language generation).
Point being - companies will purchase "compute" of this type on a permanent basis - no longer just for special projects.
Energy - this is a big one. Watch the NVIDIA GTC keynote from March 2024 (2 months ago) - power consumption was a huge topic and a driving force behind the next generation of GPUs.
AI is very power hungry. I think instead of paying per token, people should pay per watt-hour of energy consumed. It is a better measure of the true cost of using AI.
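Purely as a hypothetical sketch of what that billing could look like (the joules-per-token figure and electricity price are placeholder assumptions, not measurements):

```python
# Hypothetical energy-based billing sketch. The joules-per-token figure and
# the electricity price are placeholder assumptions for illustration only,
# and this counts raw electricity, not hardware amortization or margin.

JOULES_PER_TOKEN = 3.0   # assumed average energy per generated token
PRICE_PER_KWH = 0.15     # assumed electricity price in USD

def energy_cost_usd(tokens_generated: int) -> float:
    """Convert token usage into an estimated raw energy bill."""
    joules = tokens_generated * JOULES_PER_TOKEN
    kwh = joules / 3.6e6   # 1 kWh = 3.6 million joules
    return kwh * PRICE_PER_KWH

# Example: ~50 million generated tokens in a month
print(f"${energy_cost_usd(50_000_000):.2f}")
```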
dePIN (and this is my very personal opinion) will not displace centralized infra, but it will grow and evolve in parallel, because 1) it can be significantly better priced, 2) not everything in AI must be done within one second, so I can wait for better-priced compute, and 3) decentralization is highly desirable.
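To make point 2) concrete, here is a minimal routing sketch using made-up provider names, prices, and wait times - not real services - that picks the cheapest compute still able to meet a job's deadline:

```python
# Minimal routing sketch for point 2): pick the cheapest provider that can
# still meet the job's deadline. Provider names, prices, and latencies are
# made-up placeholders, not real services.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    usd_per_gpu_hour: float
    queue_wait_minutes: float   # expected wait before the job starts

PROVIDERS = [
    Provider("central-cloud", usd_per_gpu_hour=4.00, queue_wait_minutes=1),
    Provider("depin-pool",    usd_per_gpu_hour=1.20, queue_wait_minutes=45),
]

def pick_provider(deadline_minutes: float, run_minutes: float) -> Provider:
    """Cheapest provider whose wait + run time still fits the deadline."""
    feasible = [p for p in PROVIDERS
                if p.queue_wait_minutes + run_minutes <= deadline_minutes]
    if not feasible:
        raise RuntimeError("no provider can meet this deadline")
    return min(feasible, key=lambda p: p.usd_per_gpu_hour)

# A nightly batch job (deadline 8 hours) happily takes the cheaper dePIN pool;
# an interactive query (deadline 2 minutes) falls back to centralized compute.
print(pick_provider(deadline_minutes=480, run_minutes=60).name)   # depin-pool
print(pick_provider(deadline_minutes=2,   run_minutes=0.5).name)  # central-cloud
```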
For AI, running all of your company's queries through GPT-4/5/x... basically through OpenAI's servers, is the worst thing a company can do. Single point of failure, your data is being mined, control exerted over you by an outside entity... etc.
I just want my dePIN, neighbours I know, and a lovely communal garden. You know what I mean? Most radical community action we could possibly focus on.
Yes, I do. While I think there's a place for large data centers with big power/energy needs, I hope dePINs will evolve beautifully on their own, because we need a useful decentralized infra. And if you are talking about devices (like IoT devices you can set up in your home/yard/garage), then yes, there's no need for them to be hooked up to a central anything.
The recent paper highlights that AI models heavily rely on pre-existing data, challenging the belief in their innovative capabilities. It's crucial to acknowledge data and model drift, necessitating regular updates. This evolution in AI isn't a bubble—it's akin to the irreversible shift from horses to automobiles.
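On the "data and model drift, necessitating regular updates" point, a toy check like the following (the mean-shift metric and 0.2 threshold are simplifications I chose for illustration) is the kind of signal that would trigger a refresh:

```python
# Toy drift check: compare live data against the training-time baseline and
# flag when a model refresh might be needed. The mean-shift metric and the
# 0.2 threshold are simplifications chosen purely for illustration.
import numpy as np

def mean_shift_drift(baseline: np.ndarray, live: np.ndarray, threshold: float = 0.2) -> bool:
    """Flag drift when the live mean moves more than `threshold` baseline stds."""
    shift = abs(live.mean() - baseline.mean()) / (baseline.std() + 1e-9)
    return shift > threshold

baseline = np.random.normal(0.0, 1.0, 10_000)  # stands in for training-time data
live = np.random.normal(0.5, 1.0, 10_000)      # stands in for recent production data
print(mean_shift_drift(baseline, live))        # True -> schedule retraining/update
```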
What do you think about the new advances in science that will happen thanks to AI, and the great new data they will provide? Can the paradigm shift outpace the requirements for data? I haven't seen this taken into account.
Advancements in AI for science will generate vast amounts of data, but the accompanying technical progress will also bring management challenges, such as data quality and security issues.