I have worked around some of the best programmers in the world.
If I could impart ANY lesson on future programmers that I have learned from my various mentors over the years – it would be this:
Don’t believe anyone who tells you something can’t be done in the way you want to do it.
Here’s an example:
Take the simplest program I can write in C:
So below is a program called simple that I wrote with Notepad++. From the command line, I compiled it into an executable that’s small (and could be smaller with a few optimizations), and when run, it prints the text “Hello My friends” on the command line.
Now there’s a few things to note about this example.
First, it doesn’t fit your typical model for a C application main() loop, does it?
For those of you who aren’t aware, we’re taught in schools that all C style applications require this entry point:
But that’s actually NOT the case. I’ve experimented with several different entry point signatures, and the only limitation I’ve found is that you have to have a main.
But legacy console apps are boring, right?
So as I convert the application to make it Windows worthy, one look at the internet would suggest the following code is the absolute barebones necessary to create a Windows based application.
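The post’s original listing isn’t reproduced here, so what follows is a hedged reconstruction of the canonical Win32 template it describes (assuming an ANSI, non-UNICODE build): register a window class, create the window, and pump messages.

```c
/* A representative sketch of the "internet standard" minimal
   windowed application -- not the author's exact code. */
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_DESTROY:          /* window is going away */
        PostQuitMessage(0);   /* queue WM_QUIT so the pump exits */
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nCmdShow)
{
    WNDCLASS wc = {0};
    HWND hWnd;
    MSG msg;

    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInstance;
    wc.lpszClassName = "SimpleWindow";
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);

    if (!RegisterClass(&wc))
        return 1;

    hWnd = CreateWindow("SimpleWindow", "Hello My friends",
                        WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                        640, 480, NULL, NULL, hInstance, NULL);
    if (!hWnd)
        return 1;

    ShowWindow(hWnd, nCmdShow);

    /* The message pump: pull messages off the queue, dispatch them
       to the window procedure, until WM_QUIT arrives. */
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam;
}
```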
Now that’s a LOT of code, and I haven’t even put the message “Hello My friends” in there yet. And if you try to compile it, you’ll get errors at link time unless you specify the correct library.
Not exactly easy to work with, eh?
Here’s where it gets interesting (to me at least).
At its core, Windows has a message pump that acts in much the same way a postman might. When an event is triggered, say for instance when you press the ‘Enter’ key on the keyboard, the operating system places a “WM_KEYDOWN” message into the message queue, which is then delivered to the Windows application it’s addressed to.
It’s this section of code, here which registers to listen to the message pump:
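In a typical Win32 program, that section is the GetMessage loop (a fragment, not a complete program):

```c
/* Excerpt: the loop that pulls messages from the thread's queue.
   Passing NULL as the window handle means "any window on this thread". */
MSG msg;
while (GetMessage(&msg, NULL, 0, 0) > 0)
{
    TranslateMessage(&msg);   /* turn key presses into character messages */
    DispatchMessage(&msg);    /* hand the message to the window procedure */
}
```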
Knowing this helps reduce the application to the REAL barebones skeleton of code necessary to run a Windows based application.
Here’s that code:
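Again the original listing isn’t shown in the post; a sketch of the skeleton it describes might look like this – an entry point and a pump, with no window and no class registration at all:

```c
/* A hedged reconstruction of the stripped-down skeleton: with no
   window created, GetMessage simply blocks forever -- the program
   compiles, runs, and does nothing, exactly as described. */
#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nCmdShow)
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}
```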
That’s it. Note the difference?
And to make it easier for the command line build, I added a pragma statement inline to let the compiler know what libraries to link:
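The pragma in question is presumably MSVC’s `#pragma comment`, which embeds the linker directive in the source file itself:

```c
/* MSVC-specific: records the library dependency in the object file,
   so the command-line build needs no explicit /link argument. */
#pragma comment(lib, "user32.lib")
```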
Ok. Sure. This code may do absolutely nothing and doesn’t even display a window.
But it’s a building block of something more.
Now Windows, at a core level, treats every process’s primary thread of execution as just that – a thread of execution and nothing more. So while I’m guaranteed to receive all messages for my window and process when I’m activated, that same thread is spending its time churning through the pump, and every moment spent processing messages I’m invariably not going to use is time not spent on my own work – which can degrade the performance of my application dramatically.
Herein lies one chief reason that modern Windows based video games can drag system performance down so badly.
The primary thread is both the message pump and the creator of the main window, making it a little more difficult for Windows to do basic things like context switching (switching execution between processes).
So let’s say I do something a little different from most code that’s written: I immediately – and I do mean immediately – return control to Windows after the initial call is made to my app, I strip out the message-pump-within-a-message-pump, and I change the code in subtle ways:
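The restructured code isn’t reproduced in the post, so here’s a sketch of the design it describes. The names MyCreateWindow and bDone come from the post; everything else is my reconstruction (again assuming an ANSI build):

```c
/* The entry point creates one thread and simply blocks on it;
   the thread owns the window, the window class, and the pump. */
#include <windows.h>

#pragma comment(lib, "user32.lib")

static volatile BOOL bDone = FALSE;

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_DESTROY:          /* the close button was clicked */
        PostQuitMessage(0);   /* queues WM_QUIT for GetMessage below */
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}

DWORD WINAPI MyCreateWindow(LPVOID lpParam)
{
    HINSTANCE hInstance = (HINSTANCE)lpParam;
    WNDCLASS wc = {0};
    HWND hWnd;
    MSG msg;

    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInstance;
    wc.lpszClassName = "MyWindow";
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    RegisterClass(&wc);

    hWnd = CreateWindow("MyWindow", "Hello My friends",
                        WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                        640, 480, NULL, NULL, hInstance, NULL);
    ShowWindow(hWnd, SW_SHOW);

    /* Listen ONLY for messages destined for this window: GetMessage
       blocks, consuming no CPU, until something arrives for hWnd.
       It returns 0 on quit and -1 once the window is destroyed;
       either way the loop ends. */
    while (!bDone)
    {
        if (GetMessage(&msg, hWnd, 0, 0) <= 0)
        {
            bDone = TRUE;
            break;
        }
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nCmdShow)
{
    /* Return control to Windows immediately: spin up the worker
       thread, then block -- loop-free -- until it exits. */
    HANDLE hThread = CreateThread(NULL, 0, MyCreateWindow, hInstance, 0, NULL);
    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    return 0;
}
```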
So now. Instead of a message pump. I create a thread.
The primary process has a single, simple call which now waits for that thread’s execution to return – optimized to remove looping, sitting in a synchronously blocked state until the thread has returned. And like magic, I now have a Windows friendly application which lets Windows be the primary message pump, and from there I shift all my processing into my thread function MyCreateWindow, which – guess what it does?
So this time, rather than listening to ALL messages, I listen only for messages destined for the window I just created. In much the same way WaitForSingleObject works, GetMessage synchronously blocks and ONLY wakes when a message for my window is received. If ANY other message is sent, I don’t care – in fact, it’s never even delivered to me. So my application now consumes zero CPU until it’s actually doing something.
And until then.
It’s just patiently listening and waiting.
Like a Ninja Assassin.
And the WndProc is your standard window procedure, this one really only listening for the WM_DESTROY message, which arrives when I press the close button in the upper right hand corner of the window that’s created; it responds by calling PostQuitMessage, which places a WM_QUIT in the queue.
Once WM_QUIT is queued, GetMessage receives it, bDone is toggled, which triggers the break from the primary loop, and the thread exits – which WaitForSingleObject sees at the root level, so it quits waiting, ending the execution of the primary thread and the process with it.
Just a different way to do the primary loop.
What’s neat is I can also create multiple threads such that, should one exit, the entire application shuts down. As a developer, I just work on synchronization issues and doing what I enjoy – besides womanizing and trying to teach my mind to bend space and time – which is anything but dissecting Windows core mechanics.
With this said.
To the VM providers out there. You know. Java. Python. And to those doing development in 4GLs that are even further removed from the machine.
You may think it doesn’t behoove you to know what’s going on at the core level of your chosen language, and generally speaking I’d agree with you, but there may be points where your work inexplicably slows the entire system down.
Take Firefox for example. My favorite browser.
All it takes is one misbehaving plugin or web page, and the browser crashes in its entirety and everyone else is screwed.
Why does this happen? It can be traced back to the message pump and primary window creation in the primary thread. Since all browser windows are effectively children of that primary thread’s window, when something crashes within the primary window, all the other windows crash with it in a cascade.
In any case.
Graphics programming on Windows is currently dominated by two primary APIs – OpenGL and DirectX.
DirectX, in much the same way I described above, is based on a single window strategy and locks the developer into a large, proprietary, ever changing framework that’s expensive in both hardware demands and cost.
OpenGL, while more lightweight, forces the developer to think in a particular way when developing, in much the same way DirectX does.
I’ve been playing with OpenGL a LOT over the last few years, but I’m finding myself just not happy with the way it works. Working examples are few and far between, and when they are provided, it’s more a lesson in how the developer who produced them thinks. And when I go ask questions on forums, I understand why IT people in general get such a bad rap: I keep running into highly combative and condescending people who’d rather interrogate why I want to do something the way I’m doing it, and defeat that approach, than attempt to help me out.
Weird how the industry and online communities have gotten like this. It wasn’t this way 10 years ago.
When I first began developing in GWBasic, I could write directly to the video device through POKEs in Basic.
Not long after that, I could write directly to the video device in C by creating a pointer, aiming it at the VGA segment 0xA000 (linear address 0xA0000), and then writing bytes.
Here’s what it looks like to write to a display the size of my computer screen (1366×768 resolution) in C:
So I look online for assistance and guidance.
And everyone. And I do mean everyone. Is directed to use OpenGL or DirectX.
OpenGL and DirectX create surfaces I can write to in a similar fashion. But I actually want more control over my three dimensional objects than OpenGL provides.
Specifically, both OpenGL and DirectX offer primitives like rectangles, polygons, lines, and more. But there are limitations to how these primitives function that aren’t like the real world, and chief among my goals with my programming is to create realistic visuals that mirror the real world.
And one problem I have is ‘the fill’.
Let’s say I create an object that’s solid in the real world – a plastic ball.
Now I can model that in DirectX or OpenGL, but that model is still drawn with two dimensional shapes, and the moment you open up that ball, it’s hollow.
Lighting with OpenGL is limited to a finite handful of light sources, “for optimization purposes.”
Whereas my real world has – in this room alone – at a glance, at least 30 different light sources, not including outside light sources, which probably bumps that up to 80 light sources altogether, all of which absolutely affect things in real time.
And physics. Having a ball and separately having the physics just never has made real sense to me.
So what I am wanting is something that no other provider has – and something I’m interested in developing myself.
It’s a physics engine which models light as part of a physics based process, which just so happens to have one output method directed to a video card.
But not everyone’s going to see video, are they?
I still want this exact same model to be interactable, without extra code, through tactile feedback devices.
Just because you can’t see it doesn’t mean it’s not there, right?
So the first step in this process seems obvious, right? Find ways to write direct to the video card.
OpenGL does it.
So does DirectX.
In any case, I’m interested in avoiding the standardized libraries for accessing these locations in memory – accessing them directly myself and creating my own graphical libraries, where I can model objects in more realistic ways than is currently possible with software where light and physics are treated as two different things.
In part I’m hoping that talking (typing this) out loud will help me figure out a design approach that’s more practical.
My very real goal with this is high resolution real world quality real time rendered imagery on low end – inexpensive hardware.
I know I’m going to have to know the physics and math like the back of my hand to accomplish this. And since DirectX and OpenGL do much of this for me, I’m absolutely limited with optimization techniques as long as I’m constrained to using their methods for drawing.
In any case, here’s a list of what I have tried to draw directly to the screen:
Keeping in mind that there’s a memory mapped region which, I am told, is sent from the computer to the display device 60 times per second (the refresh rate)…
- I tried the old tried and true approach of writing to the 0xA0000 location, in both a C console session and a Windows app run with administrator priority. The C console session failed immediately with a hard exception (Windows threw it on the attempt to access an invalid memory address). The Windows based version did nothing. So I’m questioning the validity of that memory location.
- Remembering a comment from an old guy I used to work with (“It’s all data input/output”), and remembering another time opening named pipes with a file open, I tried opening a file pointer to the video card and monitor by name, leveraging “\\Device” and a few other device names they could have been assigned. It failed every time: fopen fails, and open doesn’t work at all in C – the fopen statement itself fails.
So with the invalid memory address, I suspected the memory mapping may work differently now. I looked for documentation on the memory mapping, and for utilities to show me where things might be mapped, but have been turning up goose eggs with it all.
The evidence is simple: OpenGL and DirectX both do it. So I know it can be done.
But OpenGL is dependent on Windows device contexts for the mapping, which is where I’m going to have to dig in a little more to understand how that gets communicated to the physical device itself.
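For what it’s worth, the supported route to the screen on Windows goes through exactly such a device context: GetDC(NULL) hands back a DC for the entire display, and GDI calls draw through the driver, no raw framebuffer address required. A minimal sketch (Windows-only; an illustration of the mechanism, not a recommendation):

```c
/* Drawing straight onto the desktop through a screen device context. */
#include <windows.h>

#pragma comment(lib, "user32.lib")
#pragma comment(lib, "gdi32.lib")

int main(void)
{
    HDC hdcScreen = GetDC(NULL);   /* DC for the whole display */
    if (!hdcScreen)
        return 1;

    /* Plot a short red line directly on the screen, pixel by pixel. */
    for (int x = 0; x < 100; x++)
        SetPixel(hdcScreen, 100 + x, 100, RGB(255, 0, 0));

    ReleaseDC(NULL, hdcScreen);
    return 0;
}
```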
Relatively new territory for me to navigate in.
At least in Windows.