Wow, this is embarrassing, asking for help with such a simple program! *looks at feet*
In most of the images that I turn out (although I haven't posted one yet), there are tons of little blips that make it look grainy instead of smooth. I guess what I am looking for is a more 3D feel without spending a year to learn how to use one of those programs and a week to make each image. Could anybody either offer me some advice or link me to some?
While I am asking stuff, kinda random, but is it okay to enlarge images? The version of Terragen I have only makes images at 1200 by 960. I have been enlarging them with guilt after reading the "Snapshot Policy"...
As far as the TG question goes, I never blow up my images, including terragen, as it really detracts from the quality too much. If someone wants to stretch it themselves to fit their desktop they can always do that; however, for those who have smaller resolutions the quality won't be compromised.
I agree with Sam about not resizing...we'll do it if we need to.
Usually lowering the gamma of the picture can get rid of some of the grainy feel; rendering at a higher image size setting can also help with that problem.
I apologize, I do not have Photoshop. I manipulate my images in Microsoft Digital Image Pro 7.0, and I find it useful for changing shades of the entire picture and adjusting the brightness and contrast to an art (if I had the skill). It can't do all the fun Photoshop stuff like shadows and layers and effects, or at least not nearly to the same extent.
Question: If I enlarge an image and somebody downloads the same size as the original, is the quality affected? If so, I should definitely stop resizing them, but as far as I can tell, the quality change for 1600 by 1200 isn't noticeable. I don't want to be pestered (in a nice way) by people suggesting I work at a larger size when I am already doing my best...
So if I want an Apophysis Image to look smooth and deep, you think rendering at a size larger than 1600 by 1200 would help, considering I would be shrinking it again anyways? I will try that thing with the gamma. Thanks for the advice from all 3! :-)
The answer to that q is yes, stuffnstuff. The site will adjust the size of the pixelated image to fit each resolution, so it will still be pixelated. The only way to get around this would be to deliberately d/l a larger res than your screen size and then reduce it, which might help make the pixelation less visible; however, it still won't look as good as the original, unaltered render.
I think most people on caedes know by now that TG doesn't render at 1600 x 1200, but if it worries you just leave a note saying so when you upload.
1600x1200 should be big enough, seeing as you don't have Photoshop to play around with minor blurring in certain regions. Gamma is probably set too high...either that or quality has been lowered accidentally on your renders (for comparison, default [I think] is 2000).
The default certainly wasn't 2,000 on mine! I don't know exactly what it was set at, but I know it was low, because it led me to assume that a quality rating of 100 was the max. Boy was I wrong! I actually had this cleared up last night, but it is VERY difficult for the computer to be alone for a few hours without someone disturbing it or it deciding to randomly restart. Thanks again for all the advice! :-D
Alright, I now know how to render high quality. Anybody know any tips on how to squeeze more out of my machine? :-D
Specs: HP Pavilion running Windows XP Home on a Pentium 4 2.8GHz processor with a measly 512 MB RAM and enough space on the HD to make anything I want (within reason).
Is the processor or the RAM more important? How can I squeeze a bit more out of it? Thanks to all computer wizards/nerds who can help. :-)
As long as you have enough RAM for the image that you're trying to generate, that isn't going to be the problem. These types of computer applications are almost always CPU bound. Seeing as your processor is probably not going to get much faster any time soon, your best bet would be to split the job into parts and render them separately on two or more computers. Many 3D rendering programs and some fractal programs have this function built in, though that doesn't mean it would be easy to set up.
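The splitting idea is simple enough to sketch. Here's a minimal illustration of dividing one render into horizontal strips so each machine handles a band of rows; how you actually feed a strip to your renderer depends entirely on the program (Apophysis exposes a "strips" setting in its render dialog, but the exact mechanics vary), so treat this as the bookkeeping only:

```python
def strip_bounds(height, n_strips):
    """Divide `height` rows into `n_strips` contiguous bands,
    spreading any remainder across the first few strips."""
    base, extra = divmod(height, n_strips)
    bounds, start = [], 0
    for i in range(n_strips):
        rows = base + (1 if i < extra else 0)
        bounds.append((start, start + rows))
        start += rows
    return bounds

# Two machines, a 960-row image: each renders 480 rows.
print(strip_bounds(960, 2))   # [(0, 480), (480, 960)]
# Three machines, 1200 rows: 400 rows each.
print(strip_bounds(1200, 3))  # [(0, 400), (400, 800), (800, 1200)]
```

Each machine then renders only its band, and you stitch the bands back together at the end.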
Oh wow, I am blind. When you said that, I considered it a lost cause.
Not only did I completely forget about my computer's close cousin in the basement (nearly the same stats) connected by a wireless network, but in my dad's small business, my computer has a couple of distant uncles who work together, an effective team of standard-line Dells probably only a year old, all, of course, connected by a rather efficient network. How do I team these masses together to use their joint CPU power?
Come on. I know there are tons of people, as hard as it may be to believe, nerdier than I am in this matter. Is it possible to join processing power over a network without touching each individual computer?
The answer is yes, but not on any consumer-level (read: Windows) operating system without additional software that's expressly set up to do it (that I'm aware of). Even then, it takes a fair amount of knowledge to do, so the answer ends up being no. ;-)
Someone google this all up and prove me wrong, I'd love to know about it.
Alright, I give up. How about setting up a cluster wired together, with the additional computers' sole purpose being to serve as computing slaves? Now is there a way to share processing power between old junkers instead of buying something many times the price?
Isn't that what we were all talking about a few comments up there? Hooking the hardware up is generally the easy part when you're dealing with any kind of high-performance computing system. The challenging part is configuring (or writing) the software that you finally want to run on that hardware. You need to figure out whether the software you're interested in using has parallel batch processing capability (over a network) built in. Some tasks are easier to parallelize than others. Also, most groups that need to run such compute jobs in parallel either work with software which is very easily scriptable/customizable, or they just write all the software themselves. This is a non-trivial task.
Well, I had imagined that I could use the extra juice for anything (playing, mixin' loops, word processing; all that stat-requiring stuff) without any additional steps, but if that is not the case, the main program I would want to use is what this thread is named after. Thanks again! :-)
I don't think that Apophysis is meant to do the final render though. You're supposed to export the parameter file for the flame to another program for the full-size render.
If I am correct in my guess, which I may not be considering it was before my time, it uses DOS or something. It is a black box with only grey font inside. The bar at the top says "C:\WINDOWS\system32\cmd.exe". Is this splittable? It seems like it would be, because it already contains a minimum of 2 strips per render.
A) Apophysis can self-render a flame file...it can also export it, though. The printout you're seeing, stuffn (although I don't really remember it), is a window that is meant to output what is going on...mainly for debugging. It's referenced by the program itself.
B) Sharing processors directly won't work very well because of the distance the information needs to travel and the losses that would occur because of this. Creating a hierarchical type of system would (like caedes mentioned) need a good deal of programming and therefore take some work.
Well, I have an HP, and fortunately, HP has an excellent customer support program. For half a week, I have been bothering Rupi and Britney, and they are doing extra research for me and referring me to higher groups of people. The link that Mike sent me doesn't help a whole lot for doing anything, but at least I know what I want to do: just think of one of those fancy clusters of 16 identical, slim-looking computers at expensive businesses, but now take the same idea and build it with one good computer and a couple extras a generation behind.
I'm sure your question was already answered, but here's what I can suggest with my experience with Apophysis.
First of all, you shouldn't use the "Export Flame" option unless you want to render a fractal on a transparent background...those are notoriously grainy, which is a problem with the software itself. You should instead press the little gear-looking icon to render flames on a normal opaque background to get the best possible quality.
You should render with at least 2 Oversample; I usually do 3.
The Quality (or Sample Density) should be set to at least 2000 (higher if you can afford to).
Sample radius should be either 0.3 or 0.4; any lower will be too sharp and have a lot of grain, any higher and it will be too blurry.
The size ideally would be 1600x1200, but I usually render them at 3200x2400 to really focus on detail when I manipulate, then size down for the best anti-alias effect.
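That render-big-then-shrink trick works because downsizing averages neighbouring pixels, which smooths jaggies and grain. Here's a toy sketch of the idea using plain 2x box downsampling on a list-of-lists grayscale image; real image editors use fancier filters (Lanczos, bicubic), but the averaging principle is the same:

```python
def downsample_2x(img):
    """Halve a grayscale image by averaging each 2x2 block of pixels.
    `img` is a list of rows; height and width must be even."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            total = (img[y][x] + img[y][x + 1] +
                     img[y + 1][x] + img[y + 1][x + 1])
            row.append(total / 4)
        out.append(row)
    return out

# A hard jaggy edge (0 vs 255) becomes soft intermediate values:
img = [[0,   0,   255, 255],
       [0,   0,   255, 255],
       [0,   255, 255, 255],
       [255, 255, 255, 255]]
print(downsample_2x(img))  # [[0.0, 255.0], [191.25, 255.0]]
```

So a 3200x2400 render shrunk to 1600x1200 effectively gives every final pixel the average of four rendered samples, which is why the result looks anti-aliased.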
If you get a memory error, you can limit the amount of memory given to Apophysis in this screen; it would take longer, but you would still get the quality no matter what system you're on.
utak3r@deviantart has mentioned that writing a "distributed rendering engine" for Apophysis is on his to-do list .. this would be like SETI@home, no? ... but continue inhalation and exhalation while waiting
He also wrote his own version of Apophysis (v2.02h) ... but I advise you steer clear of it .. It BSOD'd my computer and eventually caused the original v2.02 to fail to RUN
but Utak3r is a smart fellow .. he's done many Apo scripts and such (listed on the page linked above)
******************************
by default apo uses a program called HQI found in your apophysis folder to do "Export Flame" .. You can also use FLAM3 found HERE on this Page
drop it into your apo folder and just link it up in your OPTIONS screen ... PATHS tab .. EXPORT-RENDERER box
(C:\..yadda yadda yadda\flam3-render.exe)
************************************
You CAN change the "export flame" to do JPEG and not PNG .. click on the folder icon next to the name box on the export flame dialog and dial in JPEG in the drop down menu ... (and set JPEG quality to 100 on the "General" tab of the options box)
*******************
Now .. with your permission .. ;o) ... I'm going to paste a journal entry from Robert Joy (guitarzar) that is no longer available for me to link to
---------------------------------------------
Apophysis Render Settings
Journal Entry: Wed Mar 2, 2005, 9:07 AM
Guys,
I spent a bunch of time trying various setting for rendering large images in Apophysis and wanted to share with you my conclusions.
After much testing and playing around with the various render values, using the Export Flame... function on the file menu, I have determined the following optimal values (trade off between image quality and render time) for doing large renders in Apophysis:
Set your jpeg quality value in the options dialog to 100 and render your images as jpg. This will ensure your image looks like it does when you use the built-in Apo render tool. If you render the image as png, using the Export Flame... function on the file menu, it will make the background color transparent. If you use PS to add a background of the same color, the image will NOT look the same. Dark areas will be even darker.
Quality values above 4000 did NOT give noticeable improvements to the image quality.
Filter Radius is tied directly to the image size. For images where the short dimension is 3000 pixels a value of 0.8 worked well. This should be scaled linearly with the image.
Oversampling values above 1 did not give improvements to jaggy edges that merited the additional processing time (which was enormous).
I will be using quality values of 4000 for my large renders and no oversampling (1). This usually results in a less than 24 hour render for images where the short dimension is 3000 pixels and the zoom is less than 1. I will most likely use a Gaussian blur of 0.5 on the finished render to soften the image and help with any jaggy edges.
To give you a speed reference, my system specs are as follows:
CPU: 3.2Ghz Intel P4 with HT turned on
RAM: 3GB PC3200 DDR 400Mhz Dual
HD: 2-160GB disks in a RAID 0 configuration (cuts R/W time in half)
OS: Windows XP Pro
Hope you find this helpful :D
Robert
Robert Joy
-------------------------------------
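Robert's rule of thumb, that filter radius scales linearly with the image size, anchored at 0.8 for a 3000-pixel short dimension, can be written as a one-line formula. This is my extrapolation of his note, not anything official from Apophysis:

```python
def filter_radius(short_dim, ref_radius=0.8, ref_dim=3000):
    """Scale the filter radius linearly with the image's short side,
    anchored at Robert's 0.8-at-3000px data point."""
    return ref_radius * short_dim / ref_dim

print(round(filter_radius(3000), 2))  # 0.8
print(round(filter_radius(1200), 2))  # 0.32
```

Note that this lands close to the 0.3-0.4 range suggested earlier in the thread for typical wallpaper-sized renders.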
I use the settings mentioned above
generally .. 2000 ... point 3 .... 1 ...
I render at 3200x2400 JPEG using the FLAM3 exporter
gamma and brightness at 4 (sometimes 3)
apparently the Render-to-Disk that is built into Apophysis (little gear icon) has a defect that causes the renders to be blurrier by nature and require higher oversampling and such
Robert was in contact with Scott Draves, the author of the FLAM3 renderer, and this was proven to his satisfaction (good enough for me)
I have the text of a couple notes concerning this if you're interested
That was helpful. I find it interesting that Robert stated that the quality doesn't improve very much after 4,000. I just finished a render last night that was 45,000 quality at Oversample 4. The first 3 strips took a total of 72.3 hours, but I didn't clock the 4th. I wish I had Rob's computer! :-)
I have been conversing with a professor at Duke University and a few of his friends. There is so much information to learn, so progress has been slow. It definitely can be done, but not necessarily with Apophysis. If I were to build myself a cluster (also known as a "Do-it-yourself Supercomputer"), I would need to put Linux on all the computers involved as well as buy some type of high-speed network for the computers to conversate (I love that word) with each other. I have heard so much theory that I really need someone to spell out what literally needs to be done. I think I am just about there...
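For what it's worth, the "spell it out" version of the dispatch step doesn't have to be fancy. Here's a sketch of round-robining flame files across cluster nodes as ssh jobs; the host names are placeholders, the renderer command is whatever your renderer's real invocation is, and it assumes passwordless ssh with the renderer installed on every node:

```python
def build_jobs(flame_files, hosts, render_cmd="flam3-render"):
    """Round-robin the flame files across hosts, returning one
    ssh command (as an argv list) per file. Host names and the
    render command here are placeholders."""
    jobs = []
    for i, flame in enumerate(flame_files):
        host = hosts[i % len(hosts)]
        jobs.append(["ssh", host, render_cmd, flame])
    return jobs

jobs = build_jobs(["a.flame", "b.flame", "c.flame"],
                  ["node1", "node2"])
for job in jobs:
    print(" ".join(job))
# To actually run these in parallel you'd hand each argv list to
# subprocess.Popen and wait for all of them to finish.
```

This only parallelizes across whole flames (or strips, if you split them first); it doesn't magically share CPUs the way the earlier comments warned against.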
the highest Q setting I've used is 10,000 (I ramped up .. 2k - 5k - 10k) .. the first render at this setting (which, as I go to link to it, I find is gone from Caedes ... that's always how I find out) .. Ectoplasmic Gummybear impressed me so much with its smoothness that I assumed it was the higher setting, and I used that for a month or two ... getting 12 - 36 hour renders
but ...
DeviantART has a Print selling division .. and they "approve" the submissions so that quality and, therefore, the reputation of the service is maintained (if you receive a poor quality, pixelated, jaggie-laden print you'll assume that all prints are that way and not buy any more)
and, my point is, people are submitting art rendered at 2000 and getting approved ... all I'm doing is making wallpapers ... which may or may not see the light of day on your screen for an hour