• My first 4K monitor, on Windows

    I just got a pair of 4K monitors - one for a Mac Mini, and one for Windows.

    The Mac is hooked up over HDMI and I use it purely for desktop applications. It works fine.

    But on Windows, I’ve encountered a surprising number of issues.

    Problem 1: No display during boot

    I connected the monitor using DisplayPort because it seemed the most appropriate choice for my video card, a GeForce GTX 960. It has three DisplayPort outputs and only one HDMI port; I didn’t know whether the HDMI port supported 4K at 60 Hz (it does), but I knew the DisplayPort outputs would.

    I swapped the monitor while the computer was on, and everything was fine… but when I rebooted, I had no display.

    FIX: It wasn’t super easy to find information about this, but eventually I found a post that pointed me towards an NVIDIA firmware update tool for DisplayPort 1.3 and 1.4 displays that fixes the issue:

    Without the update, systems that are connected to a DisplayPort 1.3 / 1.4 monitor could experience blank screens on boot until the OS loads, or could experience a hang on boot.

    Problem 2: Euro Truck Simulator 2 stuck minimized

    UPDATE: Fixed: My Epson scanner software includes a tray icon. If I kill the process, this problem goes away. I guess it’s stealing focus when the resolution & scale change? Even though it isn’t actually showing a window? 🙄

    My video card can’t handle 4K resolutions at a reasonable framerate, so I’m running games at 1080p. Also, I often stream games to my living room TV using Steam, and it’s a 1080p TV so it fits better.

    When I launch Euro Truck Simulator 2, it immediately minimizes into the background, and any attempt to restore it brings it up for a brief moment but then it goes minimized again.

    It doesn’t happen if one of the following is true:

    • ETS2 is run at the desktop resolution—but at 4K it takes a severe framerate hit… or
    • Windows display scale is set to 100%—but at 27” 4K, 150% is far more usable. This is the workaround I’m using but I wish I didn’t have to!

    I don’t know if this is an ETS2 problem specifically, or a Windows problem. I assume other games will be affected too, but I’ve only tried Cities: Skylines and it has no such issue. ETS2 actually changes the desktop resolution for fullscreen, while Cities: Skylines uses a borderless mode that leaves the desktop resolution unchanged; this might explain the difference.

    Problem 3: 1080p not pixel perfect

    A 4K monitor can theoretically upscale 1080p using pixel doubling, where each 1080p pixel is displayed as a block of four 4K pixels (doubled along both the X and Y axes). I want this because it looks clear and perfect, as though I’m using a 1080p monitor…

    … but my particular monitor (LG 27UL550-W) doesn’t do this - it performs smoothing/interpolation of some sort on the upscale, and as a result it looks blurry.

    I feel that my GPU drivers should be able to render at 1080p but output at 4K, but if they can, I haven’t found out how.

    UPDATE: Integer scaling is available in the NVIDIA control panel, but only for Turing-architecture GPUs (GeForce 16xx, GeForce 20xx and up). I have a 960, so I’m outta luck!!
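
    For what it’s worth, here’s a toy Python sketch of what pixel doubling means (just an illustration of the idea - nothing the monitor or driver actually runs): each source pixel becomes a 2×2 block of identical output pixels, so no in-between color values are invented and nothing gets blurry.

    # Toy illustration of integer (nearest-neighbor) 2x upscaling:
    # each 1080p pixel becomes a 2x2 block of identical 4K pixels.
    def pixel_double(image):
        """image is a list of rows, each row a list of pixel values."""
        doubled = []
        for row in image:
            wide_row = [px for px in row for _ in (0, 1)]  # double in X
            doubled.append(wide_row)
            doubled.append(list(wide_row))                 # double in Y
        return doubled

    src = [[1, 2],
           [3, 4]]
    for row in pixel_double(src):
        print(row)
    # [1, 1, 2, 2]
    # [1, 1, 2, 2]
    # [3, 3, 4, 4]
    # [3, 3, 4, 4]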

    Problem 4: DisplayPort disconnects when monitor off

    When I turn off the monitor, the computer sees that as the display being disconnected. This is well-known behaviour of DisplayPort hot-plug detection.

    This isn’t really a problem when I’m sitting in front of it, but I like to stream games from the computer to my living room TV.

    When I do that, I want to turn off the locally-attached display. If I do, games basically don’t work - they see no display connected and can’t select a display resolution, because there’s no display to set one on. So streaming just doesn’t work at all.

    Even if I’m not streaming, I prefer to have direct control over the power of my display, instead of having to use the display sleep timer to shut it off.

    WORKAROUND: Use HDMI. Long-term, though, I’m probably just gonna have to live with this problem, because I understand some features, like FreeSync, require DisplayPort. Some monitors have an option to keep appearing connected to the computer while turned off, but mine doesn’t!

    Other thoughts

    These are all small-ish problems. Some of them have workarounds or whatever, but they’re all surprising issues that I feel shouldn’t happen at all. And I’ve only had the monitor for one day!

    Hardware

    • MSI NVIDIA GeForce GTX 960
    • MSI Z270-A Pro motherboard
    • Windows 10 up-to-date
    • NVIDIA drivers up-to-date

    My old monitor is a 2560x1440 panel connected over dual-link DVI. It exhibited none of the above problems, but I used it at 100% scale, at native resolution, and without DisplayPort.


  • Un-mangling some mangled unicode

    Recently I got some data from an external source that I’m to review and correct prior to use. One of the things I’ve been addressing is weird Unicode encoding stuff.

    For example:

    b'O\xc3\x82\xc2\x80\xc2\x99SAMPLA'
    

    Clearly this is supposed to have an apostrophe (O’SAMPLA), but how on earth did it get turned into \xc3\x82\xc2\x80\xc2\x99?

    After poking at it with different coding systems for a while, I finally figured it out:

    # Mangle input
    ('’'
        .encode('utf-8')    # b'\xe2\x80\x99'
        .decode('latin-1')  # 'â\x80\x99'
        .upper()            # 'Â\x80\x99'
        .encode('utf-8')    # b'\xc3\x82\xc2\x80\xc2\x99'
    )
    

    I’ve never seen mangled Unicode get passed through .upper() before. I wasn’t around to see this data get created in the first place, but my guess is something like this happened:

    1. Software A accepted the input O’SAMPLA
    2. Software A exported the data using UTF-8 encoding
    3. Software B imported the data but incorrectly interpreted it using Latin-1 encoding
    4. Software B uppercased the data (typical for this software)
    5. Software B exported the data using UTF-8 encoding

    Here’s the reverse, to restore the original data:

    # Fix mangled input
    (b'O\xc3\x82\xc2\x80\xc2\x99SAMPLA'
        .decode('utf-8')    # 'OÂ\x80\x99SAMPLA'
        .lower()            # 'oâ\x80\x99sampla'
        .encode('latin-1')  # b'o\xe2\x80\x99sampla'
        .decode('utf-8')    # 'o’sampla'
        .upper()            # 'O’SAMPLA'
    )
    

    This works for this particular input because Â needs to become â before the latin-1/utf-8 interpretation steps, but I don’t consider it appropriate to assume this will work for all inputs. Some inputs may not have been affected at all by upper(), and it would be incorrect to apply lower() to them.

    Unfortunately I can’t predict with total confidence whether applying lower() is appropriate for each input, so this data is gonna require manual review.
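
    One way to at least narrow the review down is something like the sketch below (the fix_mangled helper and its checks are mine, not part of any existing tooling): attempt the reverse pipeline, and flag anything that doesn’t survive it cleanly. A clean round-trip still doesn’t prove that the lower()/upper() step was appropriate for that value, so the flag only separates “definitely needs a human” from “probably fine”.

    # Hypothetical helper: attempt the un-mangling pipeline and flag
    # anything that doesn't survive it for manual review.
    def fix_mangled(raw: bytes):
        try:
            fixed = (raw
                .decode('utf-8')
                .lower()            # undo the uppercasing (may be wrong for some inputs!)
                .encode('latin-1')
                .decode('utf-8')
                .upper())
            return fixed, True
        except (UnicodeDecodeError, UnicodeEncodeError):
            # Pipeline doesn't apply cleanly; leave this one for manual review.
            return raw.decode('utf-8', errors='replace'), False

    print(fix_mangled(b'O\xc3\x82\xc2\x80\xc2\x99SAMPLA'))  # ('O’SAMPLA', True)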


  • Hurdles to making a multitasking environment on the NES

    I’ve been thinking about what would be required to make a multitasking environment/platform on the NES.

    Requirements:

    • Can load applications on-demand as independent processes
    • Can launch multiple instances of each application
    • Uses cooperative multitasking

    Realistically you’ll want the cartridge to have some RAM and allow bank switching for both RAM and ROM in order to increase the memory & storage available to programs.

    • The 6502 has a single stack that’s fixed to live from $100 to $1ff. This is mapped to console RAM and can’t be bank-switched. Each process wants its own stack, so they’ll either have to share this very limited space, or you’ll have to swap the contents of the stack when switching tasks.
      • Compared to x86, where you can update SS to any segment & SP to any location within the segment.
    • Similarly, process memory stored in system RAM will need to be swapped out on task switch. Memory located above $4020 could be bank-switched instead.
    • We also need a way to reach other banks, whether to jump to code in a currently-unmapped bank or simply to read data from one.
      • We can add trampoline code that performs this work, store it in a fixed bank that’s always mapped, and have the compiler call it instead of emitting a plain JSR (a rough model of this is sketched after this list).
      • Pointers will need to include bank information as well.
      • Compared to x86, where you can jump to a different segment directly without losing access to the caller’s segment.
    • Graphics/PPU state also needs to be associated with each process.
      • It’s probably easiest to give the active process the full screen, instead of allowing background processes to share the display (e.g. overlapping/tiled windows). The CHR ROM (or other video data) for a background application will probably have been swapped out to make room for the active process, so it can’t be displayed properly, and there are challenges around sharing the current palette.
        • It might be possible to switch banks/contexts between scanlines, which would require windows to be the full screen width but would let them stack vertically.
      • A system menu UI could be handled using code & data in reserved always-available banks, like the code we use to handle moves and jumps across banks.
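
    To make the cross-bank call idea a bit more concrete, here’s a rough Python model of a far-call trampoline (this only models the control flow - the bank layout, addresses, and names are made up, and real code would be 6502 assembly living in the fixed bank):

    # Rough model (not 6502 code) of a "far call" through a trampoline in an
    # always-mapped fixed bank. A far pointer carries a bank number plus an
    # address, and the trampoline saves/restores the caller's bank around the call.

    # Hypothetical banked "ROM": bank number -> {address: function}
    BANKS = {
        0: {},          # fixed bank: the trampoline itself lives here
        1: {0x8000: lambda: print("hello from bank 1")},
        2: {0x8000: lambda: print("hello from bank 2")},
    }

    current_bank = 1  # models the mapper's currently selected switchable bank

    def far_call(far_ptr):
        """Trampoline: switch to the target bank, call, then restore the caller's bank."""
        global current_bank
        bank, addr = far_ptr          # far pointer = (bank, address), not just an address
        saved = current_bank          # remember which bank the caller had mapped
        current_bank = bank           # "write to the mapper register"
        BANKS[current_bank][addr]()   # an ordinary JSR once the right bank is mapped
        current_bank = saved          # restore so the caller's own code/data is back

    far_call((2, 0x8000))  # code running out of bank 1 can call into bank 2 and return safely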

  • Using setjmp/longjmp from JNA

    TLDR: I didn’t think it would work, and it didn’t.

    Today my goal was to call an established C library from Java, but the library uses setjmp and longjmp for error reporting.

    I had been hoping/planning to use Java Native Access (JNA) to interact with the library. This is just a simple hobby project, so I want to keep it as simple as I realistically can. That means I don’t want to add a C build step to my project at all, not to mention having the build target multiple OS platforms and CPU architectures.

    But I didn’t really expect setjmp and longjmp to work in Java. I have no idea what the JVM does with the execution environment and I expected longjmp would interfere with it in a way that would very probably corrupt the JVM’s state.

    I tried it anyway. It didn’t work. The program crashed with SIGABRT after longjmp (running on Linux).


    I encountered some things I found a little more interesting than just “it doesn’t work”, though:

    jmp_buf’s size isn’t predictable

    setjmp requires that you allocate a jmp_buf to store the environment in.

    jmp_buf is defined in the system setjmp.h. On my 64-bit Linux system, sizeof(jmp_buf) == 200, and it’s defined as a 1-element array containing a struct, so it can be allocated easily then passed by reference.

    I dug into setjmp.h first to understand it more, and realized the size of jmp_buf isn’t really predictable:

    1. It varies by architecture even with the same C library, and
    2. The standard doesn’t specify its contents at all - it only has to be an array type suitable for holding the saved environment, so the layout could be anything.

    setjmp could be a macro

    The standard doesn’t specify whether setjmp is a macro or a real function. JNA can only call functions, since macros are expanded by the preprocessor at build time and never exported from the library.

    (I didn’t check how it’s implemented in other C libraries, like MSVCRT on Windows or libSystem on macOS.)


    Not exactly related, but I also happened to call fflush(stdout) from Java. It turns out that stdout is actually specified in C89/C99 to be a macro. In glibc it’s also exported as extern FILE *stdout, so I was able to use that, but then my code would not conform to the standard.


    I guess I’m gonna have to write a C adapter library that’s more Java-friendly.


