Windows drive letters are not limited to A-Z

(ryanliptak.com)

291 points | by LorenDB 7 hours ago

27 comments

  • notepad0x90 5 hours ago
    The NT paths are how the object manager refers to things. For example the registry hive HKEY_LOCAL_MACHINE is an alias for \Registry\Machine

    https://learn.microsoft.com/en-us/windows-hardware/drivers/k...

    In this way, NT is similar to Unix in that many things are just files that are part of one global VFS-like layout (the object manager namespace).

    Paths that start with drive letters are called a "DOSPath" because they only exist for DOS compatibility. But unfortunately, even in kernel mode, different subsystems might still refer to a DOSPath.

    Powershell also exposes various things as "drives", pretty sure you could create your own custom drive as well for your custom app. For example, by default there is the 'hklm:\' drive path:

    https://learn.microsoft.com/en-us/powershell/scripting/sampl...

    Get-PSDrive/New-PSDrive

    You can't access certificates in linux/bash as a file path for example, but you can in powershell/windows.

    I highly recommend getting the NtObjectManager PowerShell module and exploring:

    https://github.com/googleprojectzero/sandbox-attacksurface-a...

    ls NtObject:\

    • eloisant 2 hours ago
      It's baffling that after 30 years, Windows is still stuck with a weird drive naming structure inherited from the 80's that no longer makes sense now that nobody has floppy drives.
      • notepad0x90 2 hours ago
        I like being able to run games from the early 2000s. Being able to write software that will still run long after you're gone used to be a thing. But here we are with Linux abandoning things like 'a.out'. Microsoft doesn't have the luxury to presume that its users can recompile software, fork it, patch it, etc. When your software doesn't work on the latest Windows, most people blame Microsoft, not the software author.
        • simondotau 0 minutes ago
          I don’t like running games from the early 2000s outside of a sandbox of some description.

          While I understand the appeal of software longevity, I also think there is an oft-unspoken benefit in having unmaintained software less likely to function on modern operating systems. Especially right now, where the concept of serious personal computer security for normal consumers is only one, maybe two decades old.

        • Gud 32 minutes ago
          Ok, I prefer to use software which is future compatible, like ZFS, which is 128-bit.

          “The file system itself is 128 bit, allowing for 256 quadrillion zettabytes of storage. All metadata is allocated dynamically, so no need exists to preallocate inodes or otherwise limit the scalability of the file system when it is first created. All the algorithms have been written with scalability in mind. Directories can have up to 2^48 (256 trillion) entries, and no limit exists on the number of file systems or the number of files that can be contained within a file system.”

          https://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6qth/inde...

          Don’t want to hit the quadrillion zettabyte limit..

      • BobbyTables2 1 hour ago
        Yeah, try explaining “drive C:” to a kid these days, and why it isn’t A: or B: …

        Of course software developers are still stuck with 80-column conventions even though we have 16:9 4K displays now… Didn't that come from punchcards???

        • strogonoff 17 minutes ago
          Come for punchcards, stay for legibility.

          80 characters per line is an odd convention in the sense that it originated from a technical limitation, but is in fact a rule of thumb perfectly familiar to any typesetting professional from long before personal computing became widespread.

          Remember newspapers? Laying the text out in columns[0] is not a random quirk or result of yet another technology limitation. It is the same reason a good blog layout sets a conservative maximum width for when it is read on a landscape oriented screen.

          The reason is that when each line is shorter, the entire thing becomes easier to read, even after accounting for the legibility hit caused by hyphenation.

          Up to a point, of course. That point may differ depending on the medium and the nature of the material: newspapers, given they deal with solid plain text, limit a line to 40–60 characters; for programming it may be wider due to often longer “words” and other factors and conventions like syntax highlighting or indentation, and when dealing with particularly long identifiers (I’m looking at you, CNLabelContactRelationYoungerCousinMothersSiblingsDaughterOrFathersSistersDaughter) wider still.

          [0] Relatedly, codebases roughly following the 80 character line length limitation unlock more interesting columnar layouts in editors and multiplexers.

        • Sharlin 38 minutes ago
          It did, but 80 columns also pretty closely matches the 50ish em/70ish character paragraph width that’s usually recommended for readability. I myself wouldn’t go much higher than 100 columns with code.
        • ahoef 59 minutes ago
          While 80 characters is obviously quite short, my experience is that longer line lengths result in much less readable code. You have to try to be concise on shorter lines, with better phrasing.
      • leptons 2 hours ago
        Windows can still run software from the 80's, backwards compatibility has always been a selling point for Windows, so I'd call that a win.
        • anonymous_sorry 23 minutes ago
          It's very impressive indeed.

          Linux's goal is only source-code compatibility, which makes complete sense given the libre/open source origins. If the culture is one where you expect to have access to the source code for the software you depend on, why should the OS developers make the compromises needed to ensure you can still run a binary compiled decades ago?

        • AndrewDavis 1 hour ago
          Didn't Microsoft drop 16 bit application support in Windows 10? I remember being saddened by my exe of Jezzball I've carried from machine to machine no longer working.
          • mkup 1 hour ago
            Microsoft dropped 16-bit application support via the built-in emulator (NTVDM) from 64-bit builds of Windows; whether that happened at Windows 10 or an earlier version depends on when the user moved to a 64-bit build (in my case, it was Windows Vista). However, you can still run 16-bit apps on 64-bit builds of Windows via third-party emulators, such as DOSBox and NTVDMx64.
          • notepad0x90 30 minutes ago
            And Linux stopped supporting 32-bit x86 around the same time, I think? (Or just the original i386?)
        • chasing0entropy 2 hours ago
          My original VB6 apps (mostly) still run on win11
          • mananaysiempre 2 hours ago
            Hmm. IME VB6 is actually a particular pain point, because MDAC (a hodgepodge of Microsoft database-access thingies) does not install even on Windows 10, and a line-of-business VB6 app is very likely to need that. And of course you can’t run apps from the 1980s on Windows 11 natively, because it can no longer run 16-bit apps, whether DOS or Windows ones. (All 32-bit Windows apps are definitionally not from the 1980s, seeing as Tom Miller’s sailboat trip that gave us Win32 only happened in 1990. And it’s not the absence of V86 mode that’s the problem—Windows NT for Alpha could run DOS apps, using a fatter NTVDM with an included emulator. It’s purely Microsoft’s lack of desire to continue supporting that use case.)
            • drxzcl 1 hour ago
              Wait, what's the story of the sailboat trip? My searches are coming up empty, but it sounds like a great story.
    • p_ing 2 hours ago
      PnP PowerShell also includes a PSDrive provider [0] so you can browse SharePoint Online as a drive. These aren't limited to local sources.

      [0] https://pnp.github.io/powershell/cmdlets/Connect-PnPOnline.h...

    • anthk 1 hour ago
      ReactOS has a graphical NT OBJ browser (maybe as a CLSID) where you can just open an Explorer window and look up the whole registry hierarchy and a lot more.

      It works under Windows too.

      Proof:

      https://winclassic.net/thread/1852/reactos-registry-ntobject...

    • delusional 3 hours ago
      > You can't access certificates in linux/bash as a file path for example, but you can in powershell/windows.

      I don't understand what you mean by this. I can access them "as a file" because they are in fact just files:

          $ ls /etc/ca-certificates/extracted/cadir | tail -n 5
          UCA_Global_G2_Root.pem
          USERTrust_ECC_Certification_Authority.pem
          USERTrust_RSA_Certification_Authority.pem
          vTrus_ECC_Root_CA.pem
          vTrus_Root_CA.pem
      • notepad0x90 3 hours ago
        You can access files that contain certificate information (on any OS), but you can't access individual certificates as their own object. In your output, you're listing files that may or may not contain valid certificate information.

        The difference is similar to being able to do 'ls /usr/bin/ls' vs 'ls /proc/12345/...': the first is a literal file listing, the second is a way to access/manipulate the ls process (supposedly pid 12345). In Windows, certificates are not just files but parsed/processed/validated usage-specific objects. The same applies on Linux, but it is up to openssl, gnutls, etc. to make sense of that information. If openssl/gnutls had a VFS mount for their view of the certificates on the system (and GPG!!) that would be similar to cert:\ in PowerShell.

      • jeroenhd 2 hours ago
        Linux lacks a lot of APIs other operating systems have and certificate management is one of them.

        A Linux equivalent of listing certificates through the Windows virtual file system would be something like listing /proc/self/tls/certificates (which doesn't actually exist, of course, because Linux has decided that stuff like that is the user's problem to set up and not an OS API).

      • kadoban 3 hours ago
        I _suspect_ they mean that certs imported into MMC in Windows can be accessed at magic paths, but...yeah linux can do that because it skips the step of making a magical holding area for certs.
        • notepad0x90 2 hours ago
          There are magical holding areas in Linux as well, but that detail is up to TLS libraries like openssl at run-time, and hidden away from their clients. There are myriad ways to manage just CA certs: gnutls may not use openssl's paths, and each distro has its own idea of where the certs go. The ideal Unix-y way (which Windows/PowerShell gets right) would be to mount a virtual volume for certificates where users and client apps alike can view/manipulate certificate information. If you've tried to get internal certs working across different Linux distros/deployments you might be familiar with the headache (a minor one, I'll admit).

          Not for certs specifically (that I know of), but Plan 9 and its derivatives go very hard on making everything VFS-abstracted. Of course /proc, /sys and others are awesome, but there are still things that need their own FS view yet are relegated to just 'files', like ~/.cache, ~/.config and all the xdg standards. I get it, it's a standardized path and all, but what's being abstracted here is not "data in a file" but "cache" and "configuration" (more specific); it should still be in a VFS path, but what's exposed shouldn't be a file, it should be an abstraction of "configuration settings" or "cache entries" backed by whatever you want (e.g. Redis, SQLite, S3, etc.). The Windows registry (Configuration Manager is the real name, btw) does a good job of abstracting configuration, but obviously you can't pick and choose the back-end implementation like you potentially could on Linux.

          • jeroenhd 2 hours ago
            > The windows registry (configuration manager is the real name btw) does a good job of abstracting configurations, but obviously you can't pick and choose the back-end implementation like you potentially could in Linux.

            In theory, this is what dbus is doing, but through APIs rather than arbitrary path-key-value triplets. You can run your secret manager of choice and as long as it responds to the DBUS API calls correctly, the calling application doesn't know who's managing the secrets for you. Same goes for sound, display config, and the Bluetooth API, although some are "branded" so they're not quite interchangeable as they might change on a whim.

            Gnome's dconf system looks a lot like the Windows registry and thanks to the capability to add documentation directly to keys, it's also a lot easier to actually use if you're trying to configure a system.

  • noinsight 6 hours ago
    Windows is not limited to accessing partitions through drive letters either, it's just the existing convention.

    You can mount partitions under directories just like you can in Linux/Unix.

    PowerShell has Add-PartitionAccessPath for this:

    > mkdir C:\Disk

    > Add-PartitionAccessPath -DiskNumber 1 -PartitionNumber 2 -AccessPath "C:\Disk"

    > ls C:\Disk

    It will persist through reboots too.

    • jeroenhd 2 hours ago
      I've used this a few times to put games on exchangeable media. Installers don't like it if you pick an SD card as an install target, but they don't care if C:\Games\Whatever is actually an NTFS mount point that goes unpopulated as soon as I disconnect the memory card. This trick has the downside of confusing installers that try to check free space, though.

      For permanently mounted drives, I'd pick symbolic links over mount points because this lets you do file system maintenance and such much more easily on a per-drive level. You can still keep everything under C:\ and treat it like a weird / on Unix, but if you need to defragment your backup hard drive you won't need to beat the partition manager into submission to make the defragment button show up for your mounted path.

    • magicalhippo 4 hours ago
      Don't have to use PowerShell either, it's been available for ages through Disk Management. Right-click on a partition -> Change Drive Letter and Paths -> Add -> Mount in the following empty NTFS folder.
    • zamadatix 5 hours ago
      Only for NTFS (both source and dest) though, no exFAT shared drives under a folder mount or what have you. I think the same is actually true of ReFS for some reason.

      When you create/format the partition in the GUI tools it'll actually ask if you want to assign a drive letter or mount as a path as well.

      • chungy 5 hours ago
        I just tried mounting a exFAT partition at "C:\exFAT" and it worked just fine.
        • Filligree 5 hours ago
          Other way around. Try mounting a volume at a folder on your exFAT drive.
          • p_l 3 hours ago
            That's because some filesystems like NTFS expose necessary metadata for integration and some don't. FAT and exFAT do not.
      • p_ing 2 hours ago
        RAW partitions can be mounted at a mount point (or drive letter).

        Used to be able to use these with SQL Server.... 2000.

    • EvanAnderson 4 hours ago
      NTFS mount points can be very handy for engineering around software that doesn't allow you to customize paths. I can choose VM disks with different performance or replication policies and stitch them together like I would on a *nix OS. It's very handy and only in rare occasions have I had applications "notice" it and balk.
      • jasomill 25 minutes ago
        Symlinks also work on NTFS, though mount points have the advantage of not having a canonical path that might be unintentionally resolved and persisted.
    • mschuster91 5 hours ago
      What, excuse me, the fuck? I never knew one could do this. Thanks!
      • nolok 2 hours ago
        It's even available in the regular UI: open "Computer Management", go to the disk section, and you'll find that many of the 'magic' things about drives in the Windows world are just UI toggles.
      • korhojoa 3 hours ago
        Back when Windows 2000 was the new thing, I used to put "Program Files" on another disk with this. Starting programs became faster too, as things loaded both from the OS drive and the drive where the programs were installed.
  • thrtythreeforty 6 hours ago
    The cursedness of "€:\" is awesome. It's amazing how much more flexible the NT kernel is vs what's exposed to the user.
    • jeffbee 4 hours ago
      Yeah only the DOS façade of Windows NT is well known. Under that skin lurks some pretty wild late-1980s concepts. One of the core things to understand is that a lot of the features are based on a reverse map of GUIDs to various actions, and resolution of these map entries pervades the UI. That's why you can put {hexspew} as the name of a shortcut on the Windows desktop and have it magically become a deep link to some feature that Windows doesn't otherwise let you create a shortcut to, and also why you can just add things to the control panel which doesn't seem like it would be an intentional feature. And these actions can be named symbols inside DLLs, so they can do literally anything the OS is capable of doing. This is also why Windows has always been ground zero for malware.
      • Wonkey 1 hour ago
        That sounds fun. Do you have a link or an example "hexspew"?
      • pixl97 3 hours ago
        >so they can do literally anything the OS is capable of doing

        Yea, over the years someone thought of something they wanted to do and then did it without a systematic consideration of what that level of power meant, especially as multi-user network connectivity and untrusted data became the norm.

        • p_ing 2 hours ago
          Those weren't a consideration when the NT OS/2 Design Workbook was being written.
      • sedatk 2 hours ago
        Those GUIDs aren't related to NT kernel but Windows Explorer and its COM-based component system. They were introduced with Windows 95, IIRC.
    • Dwedit 3 hours ago
      Very cursed, and the drive letter won't even be accessible under certain codepages.
      • jeroenhd 2 hours ago
        As far as I can tell, the drive will still be accessible, it'll just require the character equivalent to € on the other code page as a drive letter.

        As long as your code page doesn't have gaps, that should be doable. It'll definitely confuse the hell out of anyone who doesn't know about this setup, though!
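        A minimal Python illustration of the code-page point, using the article's '€' drive (nothing here is Windows-specific, it just shows the character mappings): the euro sign has a byte in Windows-1252 but no mapping at all in the old OEM code page 437, so tools stuck on the latter can't even spell the drive letter.

```python
euro = "\u20ac"  # the euro sign used as a drive letter in the article

# Windows-1252 maps the euro sign to byte 0x80.
print(euro.encode("cp1252"))

# The old OEM code page 437 has no euro sign at all, so encoding fails.
try:
    euro.encode("cp437")
except UnicodeEncodeError:
    print("no euro in cp437")
```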

  • RobotToaster 6 hours ago
    > Drives with a drive-letter other than A-Z do not appear in File Explorer, and cannot be navigated to in File Explorer.

    Well there goes my plan to replace all my drive letters with emojis :(

    • mananaysiempre 5 hours ago
      You would be limited to a fairly small subset of emojis, anyway: many (most?) of them are outside of the BMP so don’t fit into a single UTF-16 code unit, and some of the remaining ones are ordinary characters followed by an emoji style selector (U+FE0F), which doesn’t fit either.
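      A quick Python sketch of the BMP point (the helper name is my own): anything outside the BMP, like most pictographic emoji, needs a surrogate pair in UTF-16, and base-character-plus-U+FE0F sequences also take two code units.

```python
def utf16_units(s: str) -> int:
    # Number of UTF-16 code units needed to encode the string.
    return len(s.encode("utf-16-le")) // 2

print(utf16_units("A"))             # plain ASCII: one code unit
print(utf16_units("\u20ac"))        # euro sign is in the BMP: one code unit
print(utf16_units("\U0001F600"))    # emoji outside the BMP: surrogate pair, two units
print(utf16_units("\u2603\ufe0f"))  # snowman + emoji style selector U+FE0F: two units
```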
    • jeroenhd 2 hours ago
      With the right code pages, you should be able to find a few smiley faces.

      For everything else, the best advice I can offer is that you can put your own autorun config file on the root of a drive to point the drive icon to a different resource. Though the path will stay boring, the GUI will show emoji everywhere, especially if you also enter emoji in the drive label.

    • bikson 2 hours ago
      But your computer name can be emoji.
  • vunderba 5 hours ago
    From the article:

    > Drives with a drive-letter other than A-Z do not appear in File Explorer, and cannot be navigated to in File Explorer.

    Reminds me of the old-school ALT + 255 trick on Win9x machines where adding this "illegal trailing character" made the directory inaccessible from the regular file explorer.

    • Telemakhos 5 hours ago
      Shhh… that’s how we hid the Duke Nukem installs on the boxen in the dorm computer lab.
    • Someone1234 4 hours ago
      Up until recently, you could do the same thing in the Windows Registry to make it so normal Windows tools (e.g. Regedit) couldn't view/modify certain entries. I believe it was still an issue within the last five or so years.
  • ddtaylor 2 hours ago
    For anyone curious, there is a somewhat similar thing in Linux called abstract domain sockets. These are Unix domain sockets whose name starts with a NUL byte ('\0'), so they never appear on the filesystem.

    I am working on a game where every player has system resources on a Linux computer. The basic idea is that some resources need to be shared or protected in some ways, such as files, but the core communication of the game client itself needs to be preserved without getting in the way of the real system environment.

    I am using these abstract domain sockets because they sidestep most other permissions in Linux. If you have the magic numbers to find the socket, you get access.
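    A minimal Python sketch of the abstract-namespace trick (the socket name is hypothetical; Linux only): binding a name whose first byte is NUL keeps the socket off the filesystem entirely, so no path permissions apply.

```python
import socket

# The leading NUL byte puts the name in the abstract namespace: it never
# appears on the filesystem, and normal file permissions don't apply.
NAME = "\0demo-abstract-socket"  # hypothetical name

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(NAME)                # no file is created anywhere
server.listen(1)

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(NAME)             # anyone who knows the exact name can connect
conn, _ = server.accept()

client.sendall(b"ping")
print(conn.recv(4))              # b'ping'

for s in (conn, client, server):
    s.close()
```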

  • the_mitsuhiko 6 hours ago
    > In other words, since RtlDosPathNameToNtPathName_U converts C:\foo to \??\C:\foo, then an object named C: will behave like a drive letter. To give an example of what I mean by that: in an alternate universe, RtlDosPathNameToNtPathName_U could convert the path FOO:\bar to \??\FOO:\bar and then FOO: could behave like a drive letter.

    For some reason I remember that the original Xbox 360 had "drive letters" which were entire strings. Unfortunately I no longer have access to the developer docs and now I wonder if my mind completely made this up. I think it was something like "Game:\foo" and "Hdd0:\foo".
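    A toy Python sketch of the conversion described in the quote (the function name is mine; the real RtlDosPathNameToNtPathName_U handles many more cases, such as relative paths, UNC paths, and \\?\ prefixes):

```python
def dos_to_nt(path: str) -> str:
    # Toy version of the conversion for simple absolute drive paths:
    # just prefix with \??\, the NT namespace directory of DOS device links.
    return "\\??\\" + path

print(dos_to_nt("C:\\foo"))    # \??\C:\foo
print(dos_to_nt("FOO:\\bar"))  # \??\FOO:\bar, the alternate-universe case
```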

  • azalemeth 6 hours ago
    This all sounds like a wonderful way to write some truly annoying malware. I expect to see hidden mounts on SQL-escape-type-maliciously-named drives soon...
    • Someone1234 4 hours ago
      I understand your point, but I'm struggling to see how this could be weaponized. Keep in mind that these DOS-compatible drive letters need to map to a real NT path endpoint (e.g. a drive/volume), so it isn't clear how the malware could both have a difficult-to-scan DOS tree while also not exposing that same area elsewhere for trivial scanning.
      • rwmj 3 hours ago
        I'm betting there's some badly written AV software out there which will crash on non-standard drive letters, allowing at least a bit of mayhem.
      • avidiax 1 hour ago
        Not sure if it is natively supported, but the malware can just decrypt a disk image to RAM and create a RAM disk mounted to +. Or it can maybe have a user space driver for a loop device, so the sectors of the drive are only decrypted on the fly.

        It would likely break a lot of analysis tools and just generally make things very difficult.

      • buzer 3 hours ago
        The recovery partition might work if it exists.
    • ahoka 4 hours ago
      Wait until you learn about Alternate Data Streams…
      • p_ing 2 hours ago
        They had their use when running Services for Macintosh.
        • jeroenhd 2 hours ago
          They're still actively used to apply the Mark of the Web to indicate a file has been downloaded from an untrusted zone and should be handled with caution. I believe macOS also applies similar metadata.

          There are a few other places where they also show up, but the MotW is the most prevalent one I've found. Most antivirus programs will warn you for unusual alternate data streams regardless of what they contain.

      • boston_clone 2 hours ago
        Decent writeup from CS with that evasion method described -

        https://www.crowdstrike.com/en-us/blog/anatomy-of-alpha-spid...

    • hulitu 4 hours ago
      > This all sounds like a wonderful way to write some truly annoying malware.

      AFAIK you need admin privileges to play with drives in Windows.

  • Tanoc 6 hours ago
    Anybody who's had to look through files on multi-disc arrays knows exactly how weird the drive letters can get. Mount the ISOs of thirty-six 8.5GB DVDs because someone thought it was a good idea to split a single zip archive into 7.99GB segments, and things get very tricky in cmd. If you weren't already in the habit of using several layers of quotation marks to separate everything, you'll form it very quickly, because the operators can be the same symbols as the drive letters, as shown in the article with the "+" example.
  • joquarky 16 minutes ago
    I miss the 'assign' feature on the Amiga.
  • WarOnPrivacy 3 hours ago
    In my first DOS, the drive letter after Z was AA. I created a series of small RAM drives to find out.

    That may have been DOS 3.3, not later. IDK when it changed.

  • rwmj 4 hours ago
    This is an interesting reference about how drive letters are stored in the Windows Registry: http://www.goodells.net/multiboot/partsigs.shtml

    I never tried, but I wonder if you could use direct registry editing to create some really strange drive letters.

  • arcfour 6 hours ago
    Hmm. This seems like it could be abused rather hilariously (or not, depending on your perspective) by malware...
    • Loughla 5 hours ago
      If the malware that exploits my machine also runs off the eggplant emoji drive, I'm becoming Amish.
  • layer8 2 hours ago
    > Non-ASCII drive letters are even case-insensitive like A-Z are

    I wonder, does `subst I: .` create i: or ı: under the Turkish locale?
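    For reference, locale-independent Unicode case mapping (which Python's str methods use) never produces the dotless ı from I; the Turkish rules map I <-> ı and İ <-> i instead, which is exactly why the question is interesting. A quick illustration:

```python
# Default Unicode case mapping, with no Turkish locale rules applied:
print("I".lower())       # plain i, never the dotless ı
print("\u0131".upper())  # dotless ı uppercases to plain I
print("\u0130".lower())  # İ lowercases to i + U+0307 combining dot (special casing)
```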

  • layer8 2 hours ago
    > drive letters are essentially just a convention borne out of the conversion of a Win32 path into a NT path

    CMD also has the concept of a current drive, and of a per-drive current directory. (While “X:\” references the root directory of drive X, “X:” references whatever the current directory of drive X is. And the current directory, i.e. “.”, is the current directory of the current drive.) I wonder how those mesh with non-standard drive letters.

    • squeek502 2 hours ago
      They work just fine, as the drive-specific CWD is stored in the environment as a normally-hidden =<drive-letter>: environment variable which has all the same WTF-16 and case-insensitive properties as drive letters:

          C:\> cd /D λ:\
      
          λ:\> cd bar
      
          λ:\bar> cd /D C:\
      
          C:\> echo %=Λ:%
          λ:\bar
      
          C:\> cd /D Λ:
      
          λ:\bar>
  • xori 1 hour ago
    The real question is: can Windows Defender scan these drives?
    • jasomill 9 minutes ago
      I don't know what it scans by default, but it can custom scan mounted volumes with no visible mount points assigned at all, e.g., my EFI partition containing a copy of the EICAR test file[1]:

        PS C:\Users\jtm> & 'C:\Program Files\Windows Defender\MpCmdRun.exe' -Scan -ScanType 3 -File '\\?\Volume{91ada2dc-bb55-4d7d-aee5-df40f3cfa155}\'
        Scan starting...
        Scan finished.
        Scanning \\?\Volume{91ada2dc-bb55-4d7d-aee5-df40f3cfa155}\ found 1 threats.
        Cleaning started...
        Cleaning finished.
      
      [1] https://www.eicar.org/download-anti-malware-testfile/
  • robocat 3 hours ago
    Similar corner cases are the bedrock of security flaws.

    If anyone adds this behaviour as a bet on a market about a future CVE or severity, can they add a link to the bet here?

  • WalterBright 1 hour ago
    26 drives should be enough for anyone.
  • ddtaylor 2 hours ago
    I never knew Λ was the upper case version of λ.
  • nunobrito 6 hours ago
    This was a cool article. Learned something new today.
  • theandrewbailey 4 hours ago
    This topic would make a good post on The Old New Thing.
  • pdntspa 3 hours ago
    Seems like a great way to hide a bunch of files from users for a malware payload
  • kijin 6 hours ago
    I remember when A and B were commonly used drive letters. C was a luxury. D was outright bourgeois.

    But for some reason, drive letters starting with C feel completely natural, too. Maybe it's because C is also the first note in the most widely known musical scale. We can totally afford to waste two drive letters at the start, right?

    • urbandw311er 6 hours ago
      Oh bless you and your youngsterness. A and B, by convention, were reserved for floppy drives and C was typically the first hard drive.
      • keitmo 5 hours ago
        On systems with a single floppy, drives A: and B: were two logical drives mapped to the same physical drive. This enabled you to (tediously) copy files from one diskette to another.
      • HPsquared 5 hours ago
        Hard drives were a luxury.
        • prerok 3 hours ago
          While the original IBM PC indeed may not have had an HDD, it did become standard with the PC XT, as early as 1983. Only the cheapest versions were without an HDD by the end of the 1980s.
          • actionfromafar 3 hours ago
            Many clones came without a HDD.
            • prerok 3 hours ago
              Sure, I can imagine that.

              My first contact with PCs was in 1988 and they all had HDDs and were definitely not "IBM PC" but clones. That said, that's just my experience so YMMV.

              • pdonis 2 hours ago
                My first PC, bought in late 1986, was a Leading Edge Model D, with two 360K floppy drives and no hard drive. I wrote a script to put COMMAND.COM and some other key files on a RAM disk on boot so I didn't have to keep the DOS floppy in the A: drive all the time. IIRC they had come out with a model that had a 20 MB hard drive but it was more than I could afford.

                MIT, where I was at school then, had some IBM PC XTs with 10 MB hard drives, but most of their computer resources were time-sharing DEC VAX machines. You could go to one of several computer labs to get on a terminal, or even dial into them--I did the latter from my PC (the one above) using a 2400 baud modem, which was fast for the time.

          • layer8 2 hours ago
            By the end of the 1980s, a lot of years had passed, and you’d buy an AT instead of an XT.
      • nopechief 2 hours ago
        [dead]
    • euroderf 5 hours ago
      D was typically a CD-ROM drive. So when CD-ROMs went the way of the dinosaurs, where did D go? Is it always some kind of SYS drive nowadays?
      • tom_ 5 hours ago
        It's just whatever happens to end up there? That's why D was typically the CD-ROM: A was the first floppy drive, B the (typically absent) second floppy drive, C the only hard disk, and then D was the next free letter.

        On my laptop, D is the SD card slot. On my desktop, it's the 2nd SSD.

        • xoxxala 2 hours ago
          When recordable CDs were brand new, we set up a station at work with two hard drives (C: and D:) and the CD burner (E:). Naturally, the CDR burning software was hard-coded for D: but didn't mention that anywhere (including the error message). Took us a few hours to figure it out.
        • hilbert42 5 hours ago
          "That's why D was typically the CD-ROM:"

          We used to set our machines so the CD-ROM was always drive L. This way we always had 'room' to add HDs so there was no gap in the alphabetical sequence. Drive D - data drive, E - swapfile, etc.

          Test and external drives (being temporary) were assigned letters further down than L. Sticking reasonably rigidly to this nomenclature avoided stuff-up such as cloning an empty drive onto one with data on it (cloning was a frequent activity).

          Incidentally, this rule applied to all machines, a laptop with HD would have C drive and L as the CD-ROM. Machines with multiple CD-ROMs would be assigned L, M and so on.

      • Kwpolska 1 hour ago
        Depends on your setup. These days, I have a D drive for sharing data with the Linux install I never use. I used to have a D drive for user data (to keep them safe when reinstalling Windows) back in the 9x/XP days (and my CD drive was E).

        I also use the drive letter assignment feature, so my external USB drive is always drive X.

      • tetha 4 hours ago
        On servers, D is commonly used to push data / vendor installations / other stuff you may want to backup separate from the OS off of the main OS drive C.
      • rzzzt 4 hours ago
        C: is the boot partition with the DoubleSpace driver, D: is the compressed volume.
        • lepicz 3 hours ago
          Stacker compressed volume ;)
        • badc0ffee 2 hours ago
          DriveSpace, surely
      • kijin 5 hours ago
        D usually refers to the second internal storage device these days. Either a second SSD, a large HDD, or an extra partition in your system disk. If you don't have any of those, a USB stick might get the D drive temporarily.
  • rado 6 hours ago
    Windows drive letters are ridiculous. Use an external drive for, e.g., video editing; its letter can be stolen by another drive and you can’t work anymore.
    • Arainach 5 hours ago
      Not while it's mounted. This is akin to complaining that on Linux if you unplug a flash drive and plug in a different one that second drive could "steal" /mnt/sdb1 or whatever.
      • Filligree 5 hours ago
        People did complain about that, which is why on Linux today that mount would use the disk UUID or label instead.

        So it’s fixed. What’s windows’ excuse? :-)

        • ChrisSD 5 hours ago
          Windows also has uuids. E.g.:

              \\.\Volume{3558506b-6ae4-11eb-8698-806e6f6e6963}\
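For illustration, the shape of such a volume GUID path can be checked with a simple pattern (a sketch only; real code would use Win32 APIs such as GetVolumeNameForVolumeMountPoint rather than string matching):

```python
import re

# Matches \\?\Volume{GUID}\ or \\.\Volume{GUID}\ -- the trailing backslash
# makes the path refer to the volume's root directory rather than the
# volume device itself.
VOLUME_GUID_PATH = re.compile(
    r"^\\\\[?.]\\Volume\{[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}\}\\$"
)

def is_volume_guid_path(path):
    return VOLUME_GUID_PATH.match(path) is not None

print(is_volume_guid_path("\\\\.\\Volume{3558506b-6ae4-11eb-8698-806e6f6e6963}\\"))
# prints True
```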
          • Someone1234 4 hours ago
            Which can be trivially mapped to directories for aliasing. Just like Linux.

            Windows NT and UNIX are much more similar than many people realize; Windows NT just has a giant pile of Dos/Win9x compatibility baked on top hiding how great the core kernel design actually is.

            I think this article demonstrates that very well.

            • jug 1 hour ago
              Yeah, NTFS is quite capable. I mostly blame the Windows UI for being a bit too dumbed down and not advertising the capabilities well.
      • hulitu 4 hours ago
        Linux is broken from this point of view. Inserting a USB drive before boot breaks booting.
        • Xiol 4 hours ago
          Certainly doesn't for me. Skill issue.
          • dpark 2 hours ago
            “Works on my machine” is rarely a helpful response. Doubling down with the “skill issue” insult makes it rude in addition to being unhelpful.

            Two other people were able to concisely explain the problem instead of being rude and condescending.

        • oasisaimlessly 4 hours ago
          Only if you have a broken kernel cmdline or fstab that references /dev/sd* instead of using the UUID=xyz or /dev/disk/by-id/xyz syntax.
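For example, a stable /etc/fstab entry names the filesystem by UUID instead of by enumeration-order device node (the UUID below is made up for illustration; `blkid` or `lsblk -f` lists the real ones):

```
# Fragile: /dev/sdb1 depends on the order the kernel enumerated the disks
/dev/sdb1  /data  ext4  defaults  0  2

# Stable: the UUID identifies the filesystem itself, wherever it's plugged in
UUID=2f1d57e6-49f3-4c89-9474-1d1f3b5c8a21  /data  ext4  defaults  0  2
```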
        • lutusp 3 hours ago
          > [ .. ] Inserting an USB drive before boot breaks booting.

          Only if the machine's BIOS is configured to give bootable USB devices boot-order priority. So it's not about Linux -- in fact, the same thing would happen on a Windows machine.

          Remember that in a properly configured Linux install, the boot partition is identified by UUID, not hardware identifier (in /etc/fstab). Consequently if you change a drive's hardware connection point, the system still boots.

    • TazeTSchnitzel 5 hours ago
      You can fix the drive letter assignments at any time if they become a problem, or use a directory as a mount point if that's less troublesome. (Win-R, diskmgmt.msc)
    • Kwpolska 1 hour ago
      If you go with the defaults, they might be. But if you manually define the letter for your external drive, it will keep it forever. (I have my external drive set to X. I’m not sure if Windows would respect that assignment if I had plugged in 19 other drives, but that is never going to happen.)
    • avhception 5 hours ago
      I remember vividly when a user couldn't access his smb drive from Windows because both his printer and the computer's case came with one of these multi-card readers with n slots, and the drive letters collided. That's when I learned that smb drive letters don't even come from the "global" pool of drive letters, because, and this is obvious in hindsight, they are a per-user affair (credentials and all that).

      I think the concept of drive letters is flawed.

      • mrweasel 5 hours ago
        Even Microsoft appears to agree with you, given that drive letters are symlinks. It's basically legacy, there's just no plan or reasonable path forward that will remove them.
      • p_ing 2 hours ago
        I always tried to point people to DFS w/ the FQDN path. We added a shortcut to the user's desktop that pointed to their home folder on the DFS namespace.
    • p_l 3 hours ago
      Only if the actual "drive letter" assigned to the drive is the special value for "auto".

      Otherwise, the drive letter is allocated statically and won't be used by another volume.

    • leptons 2 hours ago
      You can't work anymore only if you are incurious and unable to google a simple solution - assign a different drive letter with the disk management program.
  • perlgeek 5 hours ago
    Now somebody will use this to hide their malware, somehow...
  • lutusp 4 hours ago
    I hope this article gets archived in a computer history, so people in the future can read how today's default operating system persisted in requiring its vict..., umm, users, to honor an archaic practice long past any imaginable justification, while free alternative operating systems don't have this handicap.

    I regularly have this conversation with my end-user neighbor -- I explain that he has once again written his backup archive onto his original because he plugged in his Windows USB drives in the wrong sequence. His reply is, more or less, "Are computers still that backward?" "No," I reply, "Windows is still that backward."

    The good news is that Linux is more sophisticated. The bad news is that Linux users must be more sophisticated as well. But this won't always be true.

    • rwmj 3 hours ago
      Are Linux /dev device paths (originating from Unix) really much better? They're a pretty odd feature if you think about it. "Everything is a file", except only certain things can be files and at least by convention they only appear under /dev. Plan 9 takes the everything is a file concept to its logical conclusion and is much better designed.

      Edit: Also /dev/sdX paths in Linux are not stable. They can and do vary across boot, since Linux 5.6.

      • lutusp 3 hours ago
        > Are Linux /dev device paths (originating from Unix) really much better?

        Not better at all, which is why Linux uses partition UUIDs to identify specific storage partitions, regardless of hardware identifiers. This isn't automatic, the user must make it happen, which explains why Linux users need to know more than Windows users (and why Linux adoption is stalled).

        > Edit: Also /dev/sdX paths in Linux are not stable. They can and do vary across boot, since Linux 5.6.

        Yes, true, another reason to use partition UUIDs.

        > Plan 9 takes the everything is a file concept to its logical conclusion and is much better designed.

        It's a shame that Plan 9 didn't get traction -- too far ahead of its time I guess.

        • hakfoo 2 hours ago
          I always saw it as two different mindsets for data storage.

          One vision is "medium-centric". You might want paths to always be consistently relative to a specific floppy disc regardless of what drive it's in, or a specific Seagate Barracuda no matter which SATA socket it was wired to.

          Conversely it might make more sense to think about things in a "slot-centric" manner. The left hand floppy is drive A no matter what's in it. The third SATA socket is /dev/sdc regardless of how many drives you connected and in what order.

          Either works as long as it's consistent. Every so often my secondary SSD swaps between /dev/nvme0 and /dev/nvme1 and it's annoying.

          • ElectricalUnion 36 minutes ago
            And the sad thing is that stuff directly in `/dev` is neither; it's just "first come, first served" order, which is more or less guaranteed to be non-deterministic BS. One is supposed to use the udev /dev/disk/by-path/ subtree if one really wants "slot-centric" connections.
        • dist-epoch 2 hours ago
          Windows drive letters are also linked to some partition UUIDs, which is why you can move a partition to a different drive, or move drive to a different address (change SATA/m.2 port)

          You can use mountvol command to see the mount-letter/GUID mapping.
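As a sketch, mountvol-style output pairs each volume GUID path with its mount points and is easy to parse (the sample below is hand-written for illustration, not captured from a real machine):

```python
# Hand-written sample shaped like `mountvol` output; the GUIDs are invented.
SAMPLE = """\
\\\\?\\Volume{11111111-2222-3333-4444-555555555555}\\
    C:\\

\\\\?\\Volume{66666666-7777-8888-9999-aaaaaaaaaaaa}\\
    D:\\
"""

def parse_mountvol(text):
    """Map each volume GUID path to the list of mount points under it."""
    mapping = {}
    current = None
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        if stripped.startswith("\\\\?\\Volume{"):
            current = stripped
            mapping[current] = []
        elif current is not None:
            mapping[current].append(stripped)
    return mapping

print(parse_mountvol(SAMPLE))
```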

    • stockresearcher 3 hours ago
      This has (more or less) been covered before!

      https://news.ycombinator.com/item?id=17652502

      VMS expects to be run as a cluster of machines with a single drive system. How that actually happens is “hidden” from user view, and what you see are “logicals”, which can be stacked on top of each other and otherwise manipulated by a user/process without affecting the underlying file system. The results can be insane in the hands of inexperienced folks. But that is where NT came from.

      • lutusp 3 hours ago
        All true, all good points. Some day partitions and their unique UUIDs will be the sole valid identifiers. Then end users will have to be warned not to copy entire partitions including their (no longer unique) UUID. Sounds bizarre but I've had that exact conversation.