1) Transparent textures and OpenGL
I am in the process of recreating a Road Runner scene from the old Looney Tunes cartoons, where Wile E. Coyote is chasing the Road Runner down the street.
Here’s a snapshot example, where the viewport moves from right to left on the scenery as I watch the action.
So what I've done is create a simple scene with a road, copying the color from the scene above, and right now I am working on a single cactus that repeats (like the old cartoons used to have repeating scenery).
Here’s where it currently stands:
So I am having a problem with the 'white area' of the cactus texture not showing up transparently as it should. I found a pretty nifty article on the OpenGL site which does precisely what I'm looking for.
Specifically, it takes a bitmap image:
And wraps it around an OpenGL standard teapot (a standard object in OpenGL's GLUT utility library):
Which when combined, produces this cool effect, a holy teapot:
Unfortunately, they lacked source code, and only generally talked about how it was accomplished (as is typical).
So what I wound up doing was loading the cactus bitmap as a 24-bit texture, looking for the color white in this specific texture, and setting its alpha channel to mark those pixels fully transparent. This SHOULD have produced the effect I was looking for. Here's the critical code accomplishing this (in Visual Basic 6.0):
This code reads the cactus image as a 24-bit bitmap, and if I receive a transparent color selection (I can plug in any transparent color I want), I create a 32-bit copy of the image and build an alpha channel for it.
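The original VB6 listing isn't reproduced here, so here's a minimal sketch of the same 24-bit-to-32-bit conversion in C (the function and parameter names are my own). One caveat worth flagging: under OpenGL's standard blending convention, an alpha of 0 is fully transparent and 255 (or 1.0) is fully opaque, so the key color should get alpha 0.

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch (not the post's actual VB6 code): copy an RGB24 buffer into a
 * new RGBA32 buffer, making every pixel that matches the key color
 * (e.g. white, 255/255/255) fully transparent.  Alpha 0 = transparent,
 * 255 = opaque under standard GL_SRC_ALPHA blending. */
uint8_t *rgb_to_rgba_keyed(const uint8_t *rgb, int w, int h,
                           uint8_t kr, uint8_t kg, uint8_t kb)
{
    uint8_t *rgba = malloc((size_t)w * h * 4);
    if (!rgba) return NULL;
    for (int i = 0; i < w * h; i++) {
        rgba[i * 4 + 0] = rgb[i * 3 + 0];
        rgba[i * 4 + 1] = rgb[i * 3 + 1];
        rgba[i * 4 + 2] = rgb[i * 3 + 2];
        int keyed = rgb[i * 3 + 0] == kr &&
                    rgb[i * 3 + 1] == kg &&
                    rgb[i * 3 + 2] == kb;
        rgba[i * 4 + 3] = keyed ? 0 : 255; /* key color -> transparent */
    }
    return rgba;
}
```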
So this works. I should count the number of pixels that get set transparent, to make sure the number's in the ballpark I'd expect for this image. Doing that now…
Yep. Looking about like I'd expect. To err on the side of caution, though, I'll also count the number of visible pixels.
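That sanity check is simple enough to sketch in C as well (again, a stand-in for the VB6 version, names assumed): walk the RGBA buffer and count fully transparent texels.

```c
#include <stdint.h>
#include <stddef.h>

/* Sanity check: count fully transparent pixels (alpha == 0) in an
 * RGBA32 buffer.  Visible pixels are simply w*h minus this count. */
size_t count_transparent(const uint8_t *rgba, int w, int h)
{
    size_t n = 0;
    for (int i = 0; i < w * h; i++)
        if (rgba[i * 4 + 3] == 0)
            n++;
    return n;
}
```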
So there are two pertinent areas that handle this data once it's read into the byte array.
The first: this line generates an OpenGL texture from the in-memory buffer I hand it. The buffer either has an alpha channel (if it was processed by the prior logic) or it doesn't. This line of code was crashing yesterday until I got the parameters right, so it appears these lines are working as expected.
Notice how I change the alpha channel parameters for the second one?
And finally, I create the mipmaps with the alpha channel selected.
So the problem, as can be seen above, is that the alpha channel just ain't working. I've enabled blending with glEnable, but there's something else going on that I just ain't figured out yet.
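One thing worth checking: glEnable(GL_BLEND) on its own does nothing until a blend function is also set, and the texture has to be uploaded with GL_RGBA in both format slots or the alpha channel gets dropped. A minimal fixed-function setup sketch, not the post's actual code:

```c
#include <GL/gl.h>
#include <GL/glu.h>

/* Sketch of the GL state needed for a keyed-alpha cutout texture.
 * Assumes an RGBA buffer where alpha 0 = transparent. */
void setup_cutout_state(void)
{
    glEnable(GL_TEXTURE_2D);

    /* Blending: source alpha mixes the texel with what's behind it. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    /* For hard-edged cutouts like a cactus sprite, alpha test can be
     * simpler: discard any texel at or below the threshold outright. */
    glAlphaFunc(GL_GREATER, 0.5f);
    glEnable(GL_ALPHA_TEST);
}

/* Upload with mipmaps.  GL_RGBA must appear as both the internal
 * format and the pixel format, otherwise alpha never reaches GL. */
void upload_rgba_mipmaps(const unsigned char *rgba, int w, int h)
{
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, w, h,
                      GL_RGBA, GL_UNSIGNED_BYTE, rgba);
}
```

This is GL state configuration and needs a live rendering context, so it's presented as a fragment rather than a runnable demo.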
The goal is to have mountains in front of other mountains, cactuses in front of those, and the Road Runner and Wile E. in front of that on the road, and make it look exactly like the cartoon, with the different layers scrolling by at different speeds. I'll get there. But for now: no workie.
a) I got dynamic bitmap generation working, so now I can use this knowledge to generate textures and bitmaps 'on the fly' without loading them from disk. This will come in handy for creating 'noise' on textures (clouds and the like), especially if I get bump mapping working properly.
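As an illustration of the on-the-fly idea, here's a small C sketch (my own names, not code from the post) that fills an RGB buffer with reproducible grayscale noise, ready to hand to a texture upload:

```c
#include <stdint.h>

/* Fill an RGB24 buffer with grayscale noise using a seedable linear
 * congruential generator, so the same seed always produces the same
 * texture.  The constants are the common Numerical Recipes LCG. */
void fill_noise_rgb(uint8_t *rgb, int w, int h, uint32_t seed)
{
    uint32_t state = seed;
    for (int i = 0; i < w * h; i++) {
        state = state * 1664525u + 1013904223u;    /* LCG step */
        uint8_t v = (uint8_t)(state >> 24);        /* top byte as noise */
        rgb[i * 3 + 0] = v;                        /* grayscale: r=g=b */
        rgb[i * 3 + 1] = v;
        rgb[i * 3 + 2] = v;
    }
}
```

For clouds you'd typically smooth or layer several octaves of this, but raw noise is the starting point.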
2) The 'w' parameter, glVertex4f, and matrix manipulation with OpenGL
One of my primary goals with OpenGL is to get a ‘bigger on the inside’ look going.
Here’s an example using the TARDIS as inspiration:
See how the small box contains a much bigger interior? It's not like an image on a computer screen; from any angle, the view shifts based on the angle I observe it from.
Accordingly, I have a couple personal rules I am using to create this effect:
- The effect has to occur on the model itself and be viewer/viewport independent.
- This effect has to be visible on the model without switching to projection mode.
The reason for these rules is simple: let's say I wanted to see something like this in real life. The 'model' I see of the real world is visible as if I am looking around using gluLookAt, and there's no projection unless I'm watching a movie. So, so far, projection mode makes no sense to use at all for this effect, particularly since I am the observer looking at the model.
So I’m trying to create the same effect inside the simulation. Where the observer is looking at a model.
Keeping it simple.
These rules have created a bit of a problem with the implementation, specifically, I can’t use clipping regions in the traditional way because those are projection specific.
And I can’t play with viewports.
Which has me digging into glVertex4f.
So the thought for this was simple: glVertex3f takes an x, y, z coordinate in 3D space, and I can use that to create a model in 3D space. But I got to asking: what is the 'w' coordinate in glVertex4f for?
And oddly enough, OpenGL's implementation has the 'w' coordinate acting as a scaling operation on the object: the pipeline divides the position by it.
And even odder: Microsoft's OpenGL implementation has the glTranslatef function applying to four-coordinate space, with the 'w' always set to 1.
Now I thought this was weird. There's already a separate scaling operation, glScalef, so why would they override the 'w' parameter like this, and why take the 4f 'w' and use that to scale when there's a dedicated function to support this?
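For what it's worth, the divide-by-w behavior can be sketched without any GL context at all: after transformation, OpenGL divides the position through by w, so glVertex4f(x, y, z, w) lands at the same place as glVertex3f(x/w, y/w, z/w). A w of 0.5 therefore "scales" the vertex by 2. A minimal illustration (struct and function names are mine):

```c
/* The perspective divide OpenGL applies to a homogeneous vertex:
 * (x, y, z, w) maps to (x/w, y/w, z/w).  This is why 'w' reads as a
 * scaling operation, and why w = 1 (the glVertex3f default) is the
 * identity case. */
typedef struct { float x, y, z; } Vec3;

Vec3 perspective_divide(float x, float y, float z, float w)
{
    Vec3 v = { x / w, y / w, z / w };
    return v;
}
```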
So I got to thinking.
Hey, what if I ignore Microsoft's functions and use the 'w' parameter as a dimension?
Put specifically: let's say there's a doorway between two different places. One side is the doorway to the restroom here in Starbucks, but that doorway leads to Microsoft's headquarters in Seattle, maybe to Bill Gates's office. Starbucks I could draw as '1' on the 'w', and the office I could draw as '2'.
And somehow I define that doorway as a transition point between the two, and manage the render functions differently to create the desired effect?
I don’t know. I’m thinking out loud here. But I am leaning towards a function which works directly with the 4f matrix, and then seeing what kind of effect that has.
I already tried the scale operation, but the one thing I didn't do was a glTranslatef to the point of interest. Still, it's not that hard to predict that if I create a model at a given point x, y, z and scale it from there, whether through the 'w' parameter or a separate glScalef call, the bigger-on-the-inside simply ain't gonna work like I want it to.
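The missing translate-to-the-point step mentioned above is the standard trick for scaling about a point rather than the origin: translate to the point, scale, translate back. In fixed-function GL that's glTranslatef(px, py, pz); glScalef(s, s, s); glTranslatef(-px, -py, -pz);. The equivalent math, testable on its own (names are mine):

```c
/* Scale a vertex v by factor s about a pivot point p instead of the
 * origin: equivalent to translate(p) * scale(s) * translate(-p). */
typedef struct { float x, y, z; } V3;

V3 scale_about_point(V3 v, V3 p, float s)
{
    V3 r = { p.x + (v.x - p.x) * s,
             p.y + (v.y - p.y) * s,
             p.z + (v.z - p.z) * s };
    return r;
}
```

Note the pivot itself stays put under this operation, which is exactly why scaling alone can't produce bigger-on-the-inside: everything scales uniformly around the pivot, observer included once they cross the threshold.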
Especially using the gluLookAt command. Let's say I scale a box at 5,5,0; I'd have to shrink myself down once I reached the threshold of that location. That's no different than the clipping problems. I'm caught, either way, having to regard the observer as a projectionist.
Just say no to models that require knowledge of the location of the observer in order to correctly render.
FINALLY got to see what the ‘w’ command as interpreted by Microsoft was. Thanks, Stanley.
And THAT is why OpenGL is an independent standard and Microsoft's implementation is Microsoft's implementation. It's small wonder I never see any code use the 'w' parameter.
Regarding it as a dimension rather than a scale factor makes SO much more sense to me, and will come in handy for creating doorways to other places in space and time.
Speaking of which: there's a phenomenal game that's been idle for years called Miegakure, where the player can shift quickly between dimensions and solve puzzles that require knowledge of what's across those dimensions.
Now the game hasn’t seen any development progress for years. I have my suspicions why.
But with a point x, y, z equal to 5 where 'w' = 1 as a dimension, you could have one landscape, and then quickly and easily 'remember' your position in x, y, z space if you set w = 2 and have a different model there.
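That idea, purely speculative, is easy to sketch: treat 'w' as an index that selects which scene occupies the same x, y, z space. Everything here is hypothetical (the scene names, the struct, the lookup) and just illustrates the bookkeeping:

```c
#include <stddef.h>

/* Speculative sketch: 'w' as a dimension index selecting which model
 * set occupies the same x,y,z coordinates.  All names hypothetical. */
#define MAX_DIMENSIONS 4

typedef struct { float x, y, z; int w; } Point4;

/* One scene per 'w' value; slots 1 and 2 match the Starbucks/office
 * example above. */
static const char *dimension_scene[MAX_DIMENSIONS] = {
    "unused", "starbucks_interior", "microsoft_office", "unused"
};

/* Look up which scene to render for a given 4-component position. */
const char *scene_for(Point4 p)
{
    if (p.w < 0 || p.w >= MAX_DIMENSIONS)
        return NULL;
    return dimension_scene[p.w];
}
```

Crossing the doorway would then just mean bumping w while x, y, z stay put, which is the 'remembering your position' property described above.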
I still have more experimenting to do with this, clearly.
3) Adobe Photoshop
Just a quick update on this.
For years I've been using PaintShop Pro X3, and lately X8, and between the lengthy load times and the limits on what I can experiment with, I've begun learning to do what I was doing in PaintShop Pro in the more industrial Adobe Photoshop CC 2015.
In the past I’d avoided Adobe because – well – it was too complex for the minimalist features I needed. But as I started doing the animation stuff, I found myself asking the question – how can I take a picture of me and make it look like a cartoon using one of these packages?
Adobe has quite a few add-ons that can do it, at a cost, but what I learned pretty quickly after loading it is that these add-ons tend to script a number of commands that are already available in the package itself.
Which provides incentive to learn how to use Adobe from the ground up.
So I started going through all the commands, and I'm very impressed. Not only are selections and color selections far more robust (adding things like 'sample color from a region' for when a single-pixel sample looks horrid), it also has support for 3D, movies, and frame-based editing within movies.
It loads faster than X8 does. And is just as easy to use if you know the commands well.
So I talked to a couple guys who do film in the area. Showed them some of my work in Paintshop Pro, and asked them for ‘handouts’ – work that I can get experience with in Photoshop.
One is more promising than the other. He knows what he’s doing and has a history in the industry.
So hopefully this could provide me some real world experience.
And maybe alms for the poor.
I fucking hate being homeless anymore. With a passion.