Derek wrote: If all you want is a command prompt, you can put the system in a text mode (there may be a couple of these, I'm not sure), and then you write to the monitor just by changing bytes at a specific memory location. When I was writing an OS for class, there were two bytes per character cell: one for the character and one for the foreground and background colors. We didn't have to write code to get into this mode, though. Actually, I think the system starts in this mode, because I know the bootloader can write to the screen too.
If you want simple graphics, there are some other modes you can put the system in. Again you draw to the screen by writing bytes, but this time the bytes represent pixels. I think they're based on a palette (which I believe you can modify, but I don't know how). When I was first learning programming with QBASIC, I used SCREEN 13 to write games. It's 320x200 with 256 colors, so one byte per pixel, just under an even 64K of memory. I learned a little bit of DOS-based x86 assembly, including writing to this screen. I think getting into screen 13 was done through an interrupt (BIOS rather than DOS, if I remember right), so I don't know how it works at a low level.
This is all my understanding of it anyways. For more advanced graphics, you'll have to talk to the video card.
Sc4Freak wrote: I wonder how many of those old-school tricks will even work any more, given that we're slowly but surely moving away from BIOS to UEFI.
LikwidCirkel wrote: It's common practice to have two or three buffers so that the program can draw the next frame into one buffer while another is being displayed. The swap/present calls block, so the program is naturally throttled to produce exactly one buffer per refresh.
troyp wrote:Hmm...I did realize when I asked that the buffer might change as it was being read, but I was thinking it wouldn't be a problem. I think my implicit reasoning was that if half an X flashed on screen, it would only be for an instant anyway, so it would just be part of the "transition".
So maybe I'm overestimating the frequency of the reads?
troyp wrote:Thanks for the replies, guys. I have to admit, I'm still a bit confused about this stuff, like the screen tearing. I mean the two images that are combined in that Wikipedia picture aren't from successive "frames", are they? They seem way too different for that. So how does the discrepancy become so large?
troyp wrote:Oh, well that makes a bit more sense. Is the fact that the shifted portion in the image occurs in the middle of the image (rather than the top or bottom) also just a liberty they've taken? That seemed odd as well (unless the "scanning" starts in the middle of the screen for some reason).