Houdini 20
- LukeP
- Member
- 372 posts
- Joined: Mar 2009
- Online
- oldteapot7
- Member
- 111 posts
- Joined: Jan 2018
- Offline
AI produces "repetitive and predictable" results only if the prompts are generic and lame. Seriously, if you know how to make art the traditional way, you will know what to type as a prompt, but every artist should keep it to themselves. It's like with painting, basically. That's why I'm not worried that unskilled artists will be better than talented ones. The same will apply to 3D. Knowledge is the key.
- Sygnum
- Member
- 119 posts
- Joined: Aug 2015
- Offline
GCharb
Like I said, I have different people using different AI software posting images to my FB page every day
You can repeat your mantra forever, but I'm still seeing more and more cases where illustrators, photographers, voice talents, etc. have been replaced, partially and fully, on tiny budgets but also on big-budget projects for Coca-Cola etc.
Edited by Sygnum - Sep 5, 2023 13:15:01
- GCharb
- Member
- 279 posts
- Joined: Jun 2016
- Offline
Sygnum
You can repeat your mantra forever but I'm still seeing more and more cases where illustrators, photographers, voice talents etc. have been replaced - partially and fully, on tiny budgets but also big budget projects for coca cola etc.
Wow, an insult! I never thought I'd get one of those on this forum. An opinion is not a mantra; it is just that, an opinion!
The picture I shared was made only a few days ago by someone who apparently spends her days making AI images. Look at the hands: they are all crooked, and she says she has never been able to make proper hands with AI. No one who has ever posted an AI image to my FB page has been able to make proper hands. So again, to this day I have never seen an AI image with proper hands, whether an animation or a still. Insult me as much as you like, it won't change that!
Edited by GCharb - Sep 5, 2023 13:27:59
- Sygnum
- Member
- 119 posts
- Joined: Aug 2015
- Offline
GCharb
Wow, an insult! I never thought I'd get one of those on this forum. An opinion is not a mantra; it is just that, an opinion!
The picture I shared was made only a few days ago by someone who apparently spends her days making AI images. Look at the hands: they are all crooked, and she says she has never been able to make proper hands with AI. No one who has ever posted an AI image to my FB page has been able to make proper hands. So again, to this day I have never seen an AI image with proper hands, whether an animation or a still. Insult me as much as you like, it won't change that!
ROFL. Debunking wrong statements is now an "insult"? :-D
Your "opinion" is still wrong. You didn't say which generator your friend used, because, according to some users and the Midjourney developers, at least Midjourney V5 addressed that issue to some extent and tried to fix the hand problem: https://www.aibloggs.com/post/midjourney-v5-release [www.aibloggs.com]
It's what I see when searching online, too, in lots and lots of pictures. But please, stick with your "opinion" and spread false information. I'd rather see reality so I can approach the problem, because AI is and will be an exacerbating problem for us if we stay complacent and don't acknowledge it.
Edited by Sygnum - Sep 5, 2023 13:51:15
- SWest
- Member
- 313 posts
- Joined: Oct 2016
- Offline
If anyone feels unsettled by any of my shared thoughts about the future of CG, I do apologize.
Edit: In a society with free speech and opinions there will be tension, and you will meet resistance to your own ideas. It must be like that. Having opposing views does not imply disrespect, but everyone has the right simply not to agree.
Here’s a more scientific approach: text to 3D [arxiv.org]
If you are interested, here's also a link to a Stanford report about the state of AI: AI Index Report 2023 [aiindex.stanford.edu]
Edited by SWest - Sep 5, 2023 16:09:11
Interested in character concepts, modeling, rigging, and animation. Related tool dev with Py and VEX.
- spoogicus
- Member
- 44 posts
- Joined: Feb 2009
- Offline
- SWest
- Member
- 313 posts
- Joined: Oct 2016
- Offline
spoogicus
before this thread degrades
Half of the OP's question was answered long ago: "The Houdini 20 release will be in November 2023." However, now we are in the "other stuff" part. What could that mean?
I presume the lock threat pertains to uncivilized behavior, not to free speech.
Cheers!
Interested in character concepts, modeling, rigging, and animation. Related tool dev with Py and VEX.
- oldteapot7
- Member
- 111 posts
- Joined: Jan 2018
- Offline
However, what is probably feasible is to have some machine learning dig through the various learning materials that are already available (for free) and combine that with all the available documentation. Based on that, it could be possible to do simple searches for tasks and generic solutions and quickly find suitable guidelines.
This is such a great idea!
I see it as Help 2.0 and a personal teacher. It could be used as a node-connection generator too.
It could be achieved relatively easily, since ChatGPT already knows how to code, including in VEX. Recently OpenAI made available (or will) an enterprise version of ChatGPT 4 that can be fine-tuned with your own data. So SideFX could train it with more Houdini source code and other VEX code, and also add all the YouTube training tutorials and learning materials, all the Houdini forums and FB fan pages, all the PDFs, Houdini's documentation and help files, the CGWiki (or whatever it was called). Basically everything, to safely fine-tune GPT.
Make an "experimental" help copilot in Houdini, and gather all user question prompts.
Ask the fine-tuned ChatGPT 4.0 to generate lots of noob questions about how to learn Houdini, plus other specific questions, e.g. how to make realistic-looking explosions of gasoline fuel, etc.
Collect all the output data from GPT-4 and then train another LLM with it.
Since ChatGPT 4 is expensive, it's better to use Meta's Llama 2, which is free to use, even commercially.
That way we could have Help 2.0: one fine-tuned and expensive AI model could generate training data for another, free one that could run locally on a PC. Plus, Llama 2 understands multiple languages, so it could train humans in their native languages.
And continuously gather question prompts from users, and frequently update the help AI model for local download.
It could be combined with some dedicated open-source AI model for coding, and so on.
Learning Houdini could be even more fun that way. And faster.
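The generate-then-distill idea above could be sketched roughly like this: the expensive "teacher" model produces synthetic Q&A pairs, which are written out in the chat-style JSONL format commonly used as fine-tuning input for open models like Llama 2. This is only an illustration under that assumption; the function name and the hard-coded answers are hypothetical, not a SideFX or OpenAI workflow.

```python
import json

def make_training_record(question, answer):
    """Wrap one synthetic Q&A pair in a chat-style record, the shape
    typically expected by fine-tuning tooling for open LLMs."""
    return {
        "messages": [
            {"role": "system", "content": "You are a Houdini help assistant."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

# In the real pipeline these pairs would be generated by the fine-tuned
# teacher model; they are hard-coded here purely for illustration.
synthetic_pairs = [
    ("How do I scatter points on a grid?",
     "Drop a Grid SOP, then append a Scatter SOP and set the point count."),
    ("How do I read point positions in VEX?",
     "In a Point Wrangle, the current point position is the @P attribute."),
]

# One JSON record per line: the dataset the free "student" model trains on.
with open("houdini_help_train.jsonl", "w") as f:
    for q, a in synthetic_pairs:
        f.write(json.dumps(make_training_record(q, a)) + "\n")
```

The file this produces is what a local fine-tuning run would consume; regenerating it from fresh user prompts is the "frequently update the help model" step.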
Edited by oldteapot7 - Sep 5, 2023 17:52:10
- Sygnum
- Member
- 119 posts
- Joined: Aug 2015
- Offline
I think the great challenge all software vendors face, be it for 2D/3D graphics, audio, video, etc., is how to stay relevant in a world where almost no effort and no manpower are required anymore to generate images/videos/sound.
There's of course a gigantic difference in the level of control, with Houdini being a super-fine-grained DCC which allows control over absolutely everything, vs. generative art/content, which allows very little control (right now).
So the main question would be which tools, processes, and associated changes will be required to get to the output as fast as possible in Houdini, and AI-accelerated processes will be a big part of this. This way, there would be a system which allows fast but still art-directable outputs.
Edited by Sygnum - Sep 6, 2023 03:39:07
- SWest
- Member
- 313 posts
- Joined: Oct 2016
- Offline
The most time-consuming part is when you are doing something new and it doesn't work.
Actually, building networks is not so bad once you know how. There are also templates available on the shelf.
Edit: I'd like to do the art part myself. If I wanted to do text prompts, I'd be a scriptwriter or a book author. The boring, tedious, repetitive things a machine should be super happy to do.
Edited by SWest - Sep 6, 2023 16:17:50
Interested in character concepts, modeling, rigging, and animation. Related tool dev with Py and VEX.
- khomatech
- Member
- 16 posts
- Joined: Jan 2022
- Offline
d3dworld
In my personal opinion, I don't wish for AI to evolve that much; it must be limited to some extent. A bot will never replace artists and their long learning journey. It's just unfair.
If it won't replace artists then why do you want it to be limited? What's "unfair"? Was the steam engine unfair to horses?
Anyway, for H20, I'd like ramps to work within the VOP context again!
- oldteapot7
- Member
- 111 posts
- Joined: Jan 2018
- Offline
In the other H20 rumours thread, someone found that SideFX could be thinking of refreshing the 2D nodes (I haven't even touched them yet, so I could be completely wrong). I don't even remember what those nodes were called, but I mean the ones that can do compositing like Nuke or After Effects.
Do you guys think some alpha or beta of those could be in H20? And how do you imagine, or wish, it will look, and what features it'll have?
Personally, I hope it will push for some UI/UX changes; by this I mean a UI more similar to Cascadeur. That way, working with animation or the camera sequencer could be easier too (a bit like the After Effects keyframe area right below the 3D view, with lots of options for organizing, filtering, and labeling keys; great for animators as well).
Another one: I could import reference footage of, let's say, a face; quickly edit it, retime, cut, etc.; then use AI similar to Nuke's CopyCat to translate the 2D animation onto a 3D model of the face (2D drives 3D using AI/ML). So any correction from a client that I make in the 2D part (like lipsync) would automatically translate to the 3D model.
So I hope that tools for manipulating video will come among the first. This includes all the 2D warps, morphs, stretches, time manipulation, and key interpolation for generating in-betweens, plus 2D/3D tracking, so I could connect, let's say, a corner of the mouth to the corner of the mouth on the 3D object.
Also all the 2D AI models, like Meta's (Facebook's) segmentation, facial landmarks, and others.
But I could have understood it all wrong, so it might not be what I thought (something like Nuke or After Effects in Houdini). It would be cool anyway.
Edited by oldteapot7 - Sep 7, 2023 07:42:23
- coccarolla
- Member
- 73 posts
- Joined: Aug 2013
- Offline
- CYTE
- Member
- 708 posts
- Joined: Feb 2017
- Offline
- raistlinf
- Member
- 19 posts
- Joined: Nov 2014
- Offline
- mandrake0
- Member
- 644 posts
- Joined: Jun 2006
- Offline
oldteapot7
In the other H20 rumours thread, someone found that SideFX could be thinking of refreshing the 2D nodes (I haven't even touched them yet, so I could be completely wrong). I don't even remember what those nodes were called, but I mean the ones that can do compositing like Nuke or After Effects.
Do you guys think some alpha or beta of those could be in H20? And how do you imagine, or wish, it will look, and what features it'll have?
Personally, I hope it will push for some UI/UX changes; by this I mean a UI more similar to Cascadeur. That way, working with animation or the camera sequencer could be easier too (a bit like the After Effects keyframe area right below the 3D view, with lots of options for organizing, filtering, and labeling keys; great for animators as well).
Another one: I could import reference footage of, let's say, a face; quickly edit it, retime, cut, etc.; then use AI similar to Nuke's CopyCat to translate the 2D animation onto a 3D model of the face (2D drives 3D using AI/ML). So any correction from a client that I make in the 2D part (like lipsync) would automatically translate to the 3D model.
So I hope that tools for manipulating video will come among the first. This includes all the 2D warps, morphs, stretches, time manipulation, and key interpolation for generating in-betweens, plus 2D/3D tracking, so I could connect, let's say, a corner of the mouth to the corner of the mouth on the 3D object.
Also all the 2D AI models, like Meta's (Facebook's) segmentation, facial landmarks, and others.
But I could have understood it all wrong, so it might not be what I thought (something like Nuke or After Effects in Houdini). It would be cool anyway.
It was noted that the viewport will be Vulkan-only; that task is big for the UX team. Next year the VFX Reference Platform [vfxplatform.com] will be updated to Qt 6.5.x, which could bring some UX improvements.
There were rumors of 2D compositing coming to H20, and some talks years ago. A video-editing NLE would be nice, but it would be a task for a later release.
All the betas (tissue, KineFX, Karma XPU, ...) should be production-ready!
What I like more is that SOPs 2.0 / the invoke compile graph could change things: the Houdini HScript core could be removed, and maybe it's goodbye to HIP?
AI is a nice topic, and with MLOPS [github.com] we have at least got a playground for experiments and for understanding how it could be used by artists.
We are two months away from H20; we need to wait. Everything will be fine.
- habernir
- Member
- 94 posts
- Joined:
- Offline
mandrake0
It was noted that the viewport will be Vulkan-only; that task is big for the UX team. Next year the VFX Reference Platform [vfxplatform.com] will be updated to Qt 6.5.x, which could bring some UX improvements.
There were rumors of 2D compositing coming to H20, and some talks years ago. A video-editing NLE would be nice, but it would be a task for a later release.
All the betas (tissue, KineFX, Karma XPU, ...) should be production-ready!
What I like more is that SOPs 2.0 / the invoke compile graph could change things: the Houdini HScript core could be removed, and maybe it's goodbye to HIP?
AI is a nice topic, and with MLOPS [github.com] we have at least got a playground for experiments and for understanding how it could be used by artists.
We are two months away from H20; we need to wait. Everything will be fine.
Do you mean that the Vulkan viewport won't be in Houdini 20? Or is that an assumption?
- coccarolla
- Member
- 73 posts
- Joined: Aug 2013
- Offline
- ajz3d
- Member
- 572 posts
- Joined: Aug 2014
- Offline