AI Generated Models - Hype or Reality?
I have seen discussions around "AI generated models", but I have yet to actually interact with an AI program that can do this.
My interpretation of an "AI generated model" in this case is:
- An STL file (bonus for GCODE)
- Generated by an LLM like ChatGPT or Claude or Gemini
- From a prompt, either text-based or image based
- Which can be sliced and printed on a standard FDM printer
I see some Reddit posts, YouTube videos, and pop-up websites claiming this is possible, but I have yet to see a package I can actually get my hands on to use and test those claims.
My questions for this forum:
1) Has anyone used an AI model generator (AI model as defined above)?
2) Which one?
3) What was your result?
Prusa Core One, MK4S w/ MMU3 (formerly MK4 / MMU3, MK3S+/MMU2), 2 Prusa MINI+, Octoprint. PETG, PVB, (some) PLA.
RE: AI Generated Models - Hype or Reality?
1) yes
2) proprietary, cannot share, not to mention it is not publicly available at all
3) mixed; mainly image-to-model, though it can also go text-to-image and then image-to-model. This was just a small, fun internal project.
Generally it's doable; it's a matter of time and money to train the model.
See my GitHub and printables.com for some 3d stuff that you may like.
RE: AI Generated Models - Hype or Reality?
Pics or it didn't happen 😉
Prusa Core One, MK4S w/ MMU3 (formerly MK4 / MMU3, MK3S+/MMU2), 2 Prusa MINI+, Octoprint. PETG, PVB, (some) PLA.
RE: AI Generated Models - Hype or Reality?
Adding to my reply above, which on re-reading comes across as unnecessarily snarky.
I am still skeptical about the ability of LLMs to generate usable 3D models. In my experience, they represent a revolution in semantic interpretation, which should not be minimized - but semantics only go so far. Put another way, a neural network with sufficiently many inputs can generate text that looks "right-ish", or pictures that look "right-ish" - or even code that runs sometimes - but a 3D structure either stands or it doesn't, mechanical components either mesh or they don't. They need to be right, not right-ish.
I have seen claims that models can be generated by having LLMs write OpenSCAD code. Having used GitHub Copilot against the Fusion 360 API, what I have seen is that some of what it creates is usable, some is obviously wrong, and some looks right without actually being right. There is a huge difference between math.log_10() and math.log10(). In my experience, it does add some time efficiency to the coding process, but only if you can correct the output in real time.
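To make the "looks right without being right" point concrete, here is a minimal Python sketch of exactly that log10 example: one of the two calls is a real standard-library function, the other is a plausible-looking fabrication that fails at runtime.

```python
import math

# math.log10 exists in the standard library; math.log_10 does not.
# An LLM can emit either, and both look fine at a glance in review.
print(math.log10(1000))      # 3.0

try:
    math.log_10(1000)        # plausible-looking, but not a real function
except AttributeError as err:
    print(err)               # the error only surfaces when the code runs
```

The failure mode matters: the bad call is not "slightly off", it simply does not exist, and nothing short of executing (or carefully checking) the code reveals that.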
I don't see how you would ever be able to use this method to prompt the system to write a script to generate a model and then get an actual model on the other side without interference. Even with billions of lines of code as input and thousands of hours of training, the darn thing is still just predicting next sets of tokens from a given set of tokens using a statistical model. Any error in the prediction process will result in an unusable or unworkable model.
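As an illustration of why a single mispredicted token is fatal for generated CAD code, here is a hypothetical sketch: a trivial brace-balance check (a stand-in for a real OpenSCAD parser, which is an assumption here) shows that dropping one character turns a compilable model into something that cannot produce geometry at all.

```python
# Hypothetical sketch: one wrong token in generated CAD code makes the
# whole artifact unusable, unlike "right-ish" prose or images.
def braces_balanced(src: str) -> bool:
    """Minimal stand-in for a syntax check on OpenSCAD-like source."""
    depth = 0
    for ch in src:
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth < 0:          # closing brace with no opener
                return False
    return depth == 0

good = "difference() { cube([20,20,5]); translate([10,10,-1]) cylinder(h=7, r=3); }"
bad = good[:-1]  # drop one closing brace: the model no longer compiles at all

print(braces_balanced(good), braces_balanced(bad))  # True False
```

A real toolchain (OpenSCAD, a slicer) enforces far stricter rules than this, which is precisely the point: the output space has no partial credit.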
Mind you, I am open to being proved wrong. I just... need proof.
Prusa Core One, MK4S w/ MMU3 (formerly MK4 / MMU3, MK3S+/MMU2), 2 Prusa MINI+, Octoprint. PETG, PVB, (some) PLA.
RE: AI Generated Models - Hype or Reality?
I can't provide you with any proof because, as I said, it's proprietary; I can only speak in generic terms, unfortunately. The only thing I can say is that it was a request from a customer that stores certain data in a data platform.
Generally speaking, it was a set of agents (custom-built models trained on private data) that were trained to go from models to 3D renders and vice versa - from 3D renders or photos back to models. Mix in photogrammetry, laser scans, and the original design documents, and you have a warehouse of mappings between real objects, 3D models, and model definitions. You then create dedicated ML models and validation models, and this way you get a tool for faster object generation.
GPT models operate in a different domain and are used in a different context. Context is the core of building a good model from the ground up, and you need the right data to train it. That's why generic GPT models are bad at certain tasks and need retraining to improve. In some situations you have to start from scratch with a different model, and that requires an insane amount of clean data.
Let's say the effect is similar to Google Veo 3 or Google Genie 3 in video or world creation, but applied to model design. Now add automatic model validation based on, for example, a fluid simulation, and you can auto-iterate a model creation and validation loop. You start with a basic, almost-right model, run validation checks, and regenerate thousands of models with small variations until those imperfections are fixed. After reaching a certain point you have a model ready for prototyping, which you can fine-tune manually. Similar things already happen in coding tools like Copilot.
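The generate-validate-iterate loop described above can be sketched in miniature. Everything here is a stand-in: `generate_variant` substitutes for a trained generator, and `score` substitutes for a physics-based validator such as a fluid simulation; neither reflects any real product.

```python
import random

def generate_variant(params, rng):
    """Stand-in for regeneration: perturb design parameters slightly."""
    return {k: v + rng.uniform(-0.1, 0.1) for k, v in params.items()}

def score(params):
    """Stand-in validator: higher is better (here, closeness to a target design)."""
    target = {"wall": 1.2, "radius": 5.0}
    return -sum((params[k] - target[k]) ** 2 for k in target)

def iterate(seed_params, rounds=2000, seed=0):
    """Keep regenerating small variations, retaining only improvements."""
    rng = random.Random(seed)
    best, best_score = seed_params, score(seed_params)
    for _ in range(rounds):
        candidate = generate_variant(best, rng)
        s = score(candidate)
        if s > best_score:       # validator prefers this variant; keep it
            best, best_score = candidate, s
    return best

start = {"wall": 1.0, "radius": 4.5}
result = iterate(start)          # ends much closer to the target than the seed
```

The point of the sketch is the loop shape, not the optimizer: an "almost right" starting design plus an automated validator lets you trade compute for manual iteration.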
See my GitHub and printables.com for some 3d stuff that you may like.