I've been messing around with GitLab as a self-hosted alternative for a few years. I do like it, but it is resource intensive!
For the past few days I've been playing with Forgejo (from the Codeberg people). It is fantastic.
The biggest difference is memory usage. GitLab is Ruby on Rails plus over a dozen services (GitLab itself, then nginx, postgres, prometheus, etc.). Forgejo is written in Go and is a single binary.
I have been running GitLab for several years (for my own personal use only!) and it regularly creeps up to using the entirety of the RAM on a 16GB VM. I have only been playing with Forgejo for a few days, but I am using only 300MB of the 8GB of RAM I allocated, and that machine is running both the server and a runner (it is idle, but...).
I'm really excited about Forgejo and dumping GitLab. The biggest difference I can see is that Forgejo does not have GraphQL support, but the REST API seems, at first glance, to be fine.
EDIT: I don't really understand the difference between gitea and forgejo. Can anyone explain? I see lots of directories inside the forgejo volume when I run using podman that clearly indicate they are the same under the hood in many ways.
Our product studio, currently around 50 users who need daily git access, moved to a self-hosted Forgejo nearly 2 years ago.
I really can’t overstate the positive effects of this transition. Forgejo is a really straightforward Go service with very manageable mental model for storage and config. It’s been easy and cheap to host and maintain, our team has contributed multiple bugfixes and improvements and we’ve built a lot of internal tooling around forgejo which otherwise would’ve required a much more elaborate (and slow) integration with GitHub.
Our main instance is hosted on premise, so even in the extremely rare event of our internet connection going offline, our development and CI workflows remain unaffected (Forgejo is also a registry/store for most package managers so we also cache our dependencies and docker images).
Just run podman or docker login your.forgejo.instance.address, then push to it as normal. The target repo must already exist. You can check the images under Site Administration -> Packages.
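A minimal sketch of that flow (the instance address and image names below are placeholders):

```shell
# log in to the Forgejo container registry (address is a placeholder)
podman login your.forgejo.instance.address
# tag the image under your user/org namespace; the target repo must already exist
podman tag myimage:latest your.forgejo.instance.address/youruser/myimage:latest
podman push your.forgejo.instance.address/youruser/myimage:latest
```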
Speaking of authentication, it also works as an OpenID provider, meaning you can authenticate any other web software that supports it against Forgejo... which in turn can look up users in other sources.
It also has wikis.
It's an underrated piece of software that uses a ridiculously small amount of computing resources.
That's so brilliant. Wow. I'm struggling to wrap my brain around how they not only support OCI (docker) but also APK (alpine) and APT (debian) packages. That's a very cool feature.
Ease of maintenance is an even bigger difference. We've been using gitea for a bit over five years now, and gitlab for a few years before that, and gitea requires no maintenance in comparison. Upgrades come down to pulling the new version and restarting the daemon, and take just a few seconds. It's definitely the best solution for self-hosters who want to spend as little time as possible on their infrastructure.
Backups are handled by zfs snapshots (like every other server).
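For reference, that kind of backup is a one-liner per dataset (dataset and host names below are made up):

```shell
# snapshot the dataset holding the gitea data
zfs snapshot tank/gitea@backup-$(date +%F)
# optionally replicate the snapshot off-box
zfs send tank/gitea@backup-$(date +%F) | ssh backuphost zfs receive backup/gitea
```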
We've also had at least 10× lower downtime compared to github over the same period of time, and whatever downtime we had was planned and always in the middle of the night. Always funny reading claims here that github has much better uptime than anything self-hosted from people who don't know any better. I usually don't even bother responding anymore.
I guess I'll just chime in that while Gitlab is a very heavy beast, I have self hosted it for over a decade with little to no issues. It's pretty much as simple as installing their Omnibus package repository and doing apt install gitlab-ce.
When I self hosted gitlab I never found the maintenance to be that bad, just change a version in a compose.yml, sometimes having to jump between blessed versions if I've missed a few back to back.
Like others, I've switched to Gitea, but whenever I do visit GitLab I can't help but think the design / UX is so much nicer.
My usual impression of GitLab is that it has too many functions I don't ever use, so the things I actually do want (code, issues, PRs, user permissions) are needlessly hidden. What's your workflow that you find GitLab's UX to be nicer than Gitea's?
That was my take too. It is a big project with a lot of functionality. But, I never needed all of that functionality, so it just seemed bloated to me. I switched over to Gitea for self-hosted code repositories (non-public repos behind a firewall) a while back and haven't had any issues thus far.
I found Gitea's interface to be so unusably bad that I switched to full-fat GitLab.
Gitea refused to do some perfectly sensible action -- I think it had something to do with creating a fork of my own repo. Looking online, there's zero technical reason for this, and the explanation given was "this is how GitHub does things". Immediately uninstalled. I'm not here for this level of disrespect.
What exactly is the advantage of running something like GitLab vs what I do which is just a server with SSH and a file system? To create a new repo I do:
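(The command elided here is presumably a bare init on the server; a minimal sketch, with the path chosen to match the remote URL in the next sentence:)

```shell
# on example.com (e.g. via: ssh example.com '...'):
# a bare repository has no working tree and is safe to push to
git init --bare "$HOME/repos/my-proj.git"
```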
Then I just set my remote origin URL to example.com:repos/my-proj.git
The filesystem on example.com is backed up daily. Since I do not need to send myself pull requests for personal projects and track my own TODOs and issues via TODO.md, what exactly am I missing? I have been using GitHub for open source projects and work for years but for projects where I am the only author, why would I need a UI besides git and my code editor of choice?
Collaboration and specifically collaboration with non git nerds. That's primarily what made GitHub win the VCS wars back in the day. The pull request model appealed to anyone who didn't want to learn crafting and emailing patches.
Yes, it's the PRs, and there is a misunderstanding I think because the OP and the GP's use-cases are quite different. Self-hosting your own repository on a remote server (and perhaps sharing it with 1 or 2 collaborators) is simple but quite different than running a public open source project that solicits contributions.
You don’t! Forges are for collaboration outside the rhythm of git commits. You might be happy to make a new commit every time you have something to add to an issue, but with X issues and Y comments an hour, polluting the git timeline with commentary becomes unhelpful.
It's a shame that GitHub won the CI race by sheer force of popularity and it propagates its questionable design decisions. I wish more VCS platforms would base their CI systems on Gitlab, which is much much better than GitHub actions.
If you want even more minimal, Gerrit is structured as a Java app with no external dependencies like databases, and stores all its configuration and runtime information on the filesystem, mostly as data structures in the git repos.
A shared filesystem is all you need to scale/replicate it, and it also makes the backup process quite simple.
I might be one of the few who is intrigued by this even though it's Java, but it looks really neat. Does it host git repositories like Gitea, GitHub, etc., or is it more of a project management site for the repositories? They describe it as “code review”, so I wasn't sure.
I’m a little put off by the Google connection, but it seems like it could run rather independently.
It's hyper-focused on code review and CI integration, which it does really well.
It's not focused on all the other stuff that people think of in code forges (hosting the README in a pretty way, arbitrary page hosting, wiki, bug tracking, etc.) but can be integrated with 3rd party implementations of those fairly trivially.
I personally find the rebase- and commit-stacking-focused method of integration that Gerrit uses to be easier and cleaner than PRs in GitHub.
Having done CI integrations with both, Gerrit's APIs send pre- and post-merge events through the same channel, instead of needing multiple separate listeners like GitHub.
One concern the post brings up - single point of failure. Yes, in this case, blah blah big company microsoft blah blah (I don't disagree, but..). I'm more worried about places like Paypal/Google/etc banning than the beast from Redmond.
Self-hosting is still a single point of failure, and as for the article's "mirroring" argument, well... it gives you redundancy for reads, but what about writes?
Redundancy for read access to the source code is a concern for Dillo. Some years ago, the domain name registration lapsed, and was promptly bought by an impersonator, taking the official repository offline. If it hadn't been for people having clones of the repository, the source code and history would have been lost.
How do people find your online project and know it's you (instead of an impersonator) without relying on an authority, like GitHub accounts or domain names? It is a challenging problem with no good solution. At least now the project is alive again and more resilient than before.
I found the banning comment to be odd. That said, all it really takes is a policy change (something that I see as far more likely in Microsoft's case) or simply a change in the underlying software (again, somewhat likely with Microsoft) for the platform to become unusable for them. Keep in mind that Dillo is a browser for those who can't or don't want to fit into the reality of the modern web.
I think it’s a fair concern, e.g. Forgejo is a simple directory on disk, with an option to make that into S3 storage. It really is a no-brainer to set that up for as much resilience as necessary, with various degrees of "advanced" depending on your threat model and experience. The lack of a FAANG/M in the equation makes it even more palatable.
We've been looking at Forgejo too. Do you have any experience with Forgejo Actions you can share? That is one thing we are looking at with a little trepidation.
We use them in our shop. It's quite straightforward if you're already familiar with GitHub Actions. The Forgejo runner is tiny and you can build it even on unsupported platforms (https://code.forgejo.org/forgejo/runner), e.g. we've set up our CI to also run on Macs (by https://www.oakhost.net) for App Store related builds. It's really quite a joy :)
Are you building MacOS apps? More specifically, are you doing code signing and notarization and stamping within CI? If so, is this written up somewhere? I really struggled with getting that working on GitLab. I did have it working, but was always searching for alternatives.
I set up actions yesterday. There are a few tiny rough edges, but it is definitely working for me. I'm using it to build my Hugo blog, which sprinkles in a Svelte app, so it needs nodejs + hugo and a custom orchestrator written in Zig.
What I did:
* used a custom docker image on my own registry domain with hugo/nodejs and my custom zig app
* no problems
* store artifacts
* required using a different artifact "uses" v3 instead of v4 (uses: actions/upload-artifact@v3)
* An example of how there are some subtle differences from GitHub Actions, but IMHO this is a step forward, because GitLab CI YAML is totally different
* Can't browse the artifacts like I can on GitLab; it only allows downloading the zip. Not a big deal, but it's nice to be able to verify without littering my Downloads folder.
* Unable to use "forgejo-runner exec" which I use extensively to test whether a workflow is correct before pushing
* Strange error: "Error: Open(/home/runner/.cache/actcache/bolt.db): timeout"
* I think GitLab broke this feature recently as well!
* Getting the runner to work with podman and as a service was a little tricky (but now works)
* Mostly because of the way the docker socket is not created by default on podman
* And the docker_host path is different inside the runner config file.
There are two config files: one (JSON) is always stored in .runner and contains the auth information and IP; the other is YAML (the runner needs the -c switch to specify it) and holds the runner's config (docker options, etc.). It's a bit strange there are two files IMHO.
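For reference, the artifact step from the list above has the same shape as on GitHub, just pinned to v3 (file path, labels, and names here are hypothetical):

```yaml
# .forgejo/workflows/build.yml (hypothetical)
jobs:
  build:
    runs-on: docker          # label depends on your runner setup
    steps:
      - uses: actions/checkout@v3
      - name: Upload built site
        uses: actions/upload-artifact@v3   # v4 did not work here, per the notes above
        with:
          name: public
          path: public/
```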
This will occur if you have a `forgejo-runner daemon` running while you try to use `exec` -- both are trying to open the cache database, and only the first to open it can operate. You could avoid this by changing the cache directory of the daemon by changing `cache.dir` in the config file, or run the two processes as different users.
> It's a bit strange there are two files IMHO.
The `.runner` file isn't a config file, it's a state file -- not intended for user editing. But yes, it's a bit odd.
> GitHub seems to encourage a "push model" in which you are notified when a new event occurs in your project(s), but I don't want to work with that model. Instead, I prefer it to work as a "pull model", so I only get updates when I specifically look for them.
I agree with the sentiment, but want to point out that email can be used to turn push into pull, by auto-filtering the respective email notifications into a separate dedicated email folder, which you can choose to only look at when you want.
> To avoid this problem, I created my own bug tracker software, buggy, which is a very simple C tool that parses plain Markdown files and creates a single HTML page for each bug.
> Additionally, GitHub seems to encourage a "push model" in which you are notified when a new event occurs in your project(s), but I don't want to work with that model. Instead, I prefer it to work as a "pull model", so I only get updates when I specifically look for them. This model would also allow me to easily work offline. Unfortunately, I see that the same push model has been copied to alternative forges.
Would someone be kind enough to explain this to me? What's the difference between the push model and the pull model? What about the push model makes it difficult to work offline?
I would love to see more projects use git-bug, which works very well for offline collaboration. All bug tracker info is stored in the repo itself. https://github.com/git-bug/git-bug
It still needs work to match the capabilities of most source forges, but for small closed teams it already works very well.
Reminder that POP and IMAP are protocols, and nothing stops a code forge—or any other website—from exposing the internal messaging/notification system to users as a service on the standard IMAP ports; no one is ever required to set up a bridge/relay that sends outgoing messages to, say, the user's Fastmail/Runbox/Proton/whatever inbox. You can just let the user point their IMAP client to _your_ servers, authenticate with their username and password, and fetch the contents of notifications that way. You don't have to implement server-to-server federation typically associated with email (for incoming messages), and you don't have to worry about deliverability for outgoing mail.
All of this makes sense. Thank you for explaining. I don't think I understand the difference though.
Like, are they calling the "GitHub pull request" workflow the push model? What is "push" about it, though? I can download all the pull request patches to my local machine and work offline, can't I?
A GitHub pull request pushes a notification/e-mail to you to handle the merge, and you mostly have to handle the pull request online.
I don't know of a way to download the pull request as a set of patches and work offline; instead you have to open a branch, merge the PR into that branch, test things, and merge that branch into the relevant one.
Or you have to download the forked repository, do your tests to see whether the change is relevant/stable and whatnot, and if it works, you can then merge the PR.
---
edit: Looks like you can get the PR as a patch or diff, and it's trivial, but you have to be online again to get it that way. So getting your mail from your box is not enough; you have to get every PR as a diff, with a tool or manually, and then organize them. E-mails are a much more unified and simple way to handle all this.
---
In either case, reviewing the changes is not possible when you're offline, plus the pings of the PRs are distracting if your project is popular.
Seems like you found it, but for others: one of the easiest ways to get a PR's diff/patch is to just put .diff or .patch at the end of its URL. I use this all the time!
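i.e., the transformation is just a suffix append (the repo and PR number below are made up):

```shell
pr_url="https://github.com/example/project/pull/42"   # hypothetical PR
patch_url="${pr_url}.patch"                           # or ${pr_url}.diff for a plain diff
echo "$patch_url"   # prints https://github.com/example/project/pull/42.patch
# to review offline afterwards: curl -LO "$patch_url" && git am 42.patch
```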
It’s bonkers to me that there isn’t a link to the plain patch from the page. Yes, it’s trivial to add a suffix once you know, but lots of people don't know, as evidenced by this thread.
Discoverability in UX seems to have completely died.
You could set up a script that lives in the cloud (so you don't have to), receives PRs through webhooks, fetches any associated diff, and stores them in S3 for you to download later.
Maybe another script to download them all at once and apply each diff to its own branch automatically.
Almost everything about git and github/gitlab/etc. can be scripted. You don't have to do anything on their website if you're willing to pipe some text around the old way.
I would say it is time/life management: push tells you to do something now. With pull, I check each Friday afternoon what's up in my hobby project, work on it for a few hours, then call it a day and stay uninterrupted till next week.
We are in the diaspora phase; there is a steady stream of these announcements, each with a different GitHub alternative. I speculate that within a few months, the communities will have settled on a single dominant one. I'm curious if it will be one of the existing ones, or something new. Perhaps a well-known company or individual will announce one; it will have good marketing, and dominate.
This has been going on for a decade; at the beginning it was projects moving to GitLab, and now there are a lot of alternative projects, but GitHub is still the only one that counts for discoverability. It's a very small minority of projects that move away from GitHub, and it's way too early to declare GitHub doomed.
No different than everyone talking about the next “iPhone Killer” when someone other than Apple releases a phone. Although, I think that rhetoric has largely died down.
Github is fine for discoverability but as a development platform I think it's going to die. Public issues/PRs are a cesspool now and going to get worse, and agentic workflows are going to drive companies to want to hide how the sausage is made. People will gradually migrate to alternatives and mirror to Github while it remains relevant.
Different devs have different preferred ways to work and collaborate. I doubt the FOSS community will converge on a single solution. I think we’re at a point of re-decentralization, where devs will move their projects to the forge that satisfies their personal/group requirements for control, hosting jurisdiction, corporate vs community ownership, workflow, and uptime.
This is due to increasing competition in the source forge space. It’s good that different niches can be served by their preferred choice, even if it will be less convenient for devs who want to contribute a patch on a more obscure platform.
> I speculate that within a few months, the communities will have settled on a single dominant one.
The solutions on the roadmap are not centralized like GitHub. There is a real initiative to promote federation so we would not need to rely on one entity.
I love this, and hope it works out this way. Maybe another way to frame it: In 2 years, what will the "Learn Python for Beginners" tutorials direct the user towards? Maybe there will not be a consensus, but my pattern-matching brain finds one!
> Perhaps a well-known company or individual will announce one; it will have good marketing, and dominate.
Hah, exactly what we’re attempting with Tangled! Some big announcements to come fairly soon. We’re positioning ourselves to be the next social collab platform—focused solely on indies & communities.
It looks like all they're doing is griping over frontends and interfaces to do all the custodial work other than version control (i.e., all the baked-in git provisions).
GitLab is too heavyweight for many projects. It’s great for corporations or big organizations like GNOME, but it’s slow and difficult to administer. It has an important place in the ecosystem, but I doubt many small projects will choose it over simpler alternatives like Codeberg.
GitLab is part of the reason I'm thinking along these lines: it has been around for a while as a known, reasonably popular alternative to GitHub. So I expected the announcement to be "We moved to GitLab", yet what I observe is "We moved to CodeHouse" or "We moved to Source-Base". The self-hosting here, with mirrors to two forges (one of which I'm not familiar with), is another direction.
I freely admit I am out of my depth and have nothing educational to add on the subject. I have but four things to say:
1. Oh! It's "d.i.l.l.o."! I misread that as something else.
2. After reading many comments in this thread, I must admit I am stupefied at the sheer amount of stuff that can go into merely setting up and maintaining a version control system for a project.
3. I have cited every one of the same problems OP enumerates as my argument for switching new projects over to self-hosted fossil. It also helps a good bit with #2 above when you're a small organization and you're the sole software engineer, sysadmin, and tier >1 support. It's a much simpler VCS that's closer to using perforce in my experience. YMMV, but it's the kind of VCS that doesn't qualify as a skill on a resume.
4. I also find GH deploy keys frustrating because I can't use the same key for multiple repositories. I have 3 separate applications that each run on 4 machines in my cluster, and I have to configure 12 separate deploy keys on GitHub and in my ~/.ssh/config file.
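A common workaround is one host alias per deploy key in ~/.ssh/config, so the remotes stay mechanical (alias, key, and repo names below are hypothetical):

```
# ~/.ssh/config
Host github-app1
    HostName github.com
    User git
    IdentityFile ~/.ssh/deploy_app1
    IdentitiesOnly yes
```

The remote for that app then becomes git@github-app1:org/app1.git instead of git@github.com:org/app1.git, and each alias selects its own key.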
Off-topic, but as a non-native speaker I’m curious if it’s common to say “more and more slow” as opposed to “slower and slower” (maybe to emphasize the adjective?)
For real. I've been hearing for years that the interface is slow and requires JavaScript, and never really paid it much mind; it worked for me. But lately the page loading has gotten abusively slow. I don't think it can simply be blamed on React, because that move was made long before this started.
I've taken to loading projects in github.dev for navigating repos so I pay the js tax just once and it's fine for code reading. But navigating PRs and actions is terrible.
> GitHub has been useful to store all repositories of the Dillo project, as well as to run the CI workflows for platforms in which I don't have a machine available (like Windows, Mac OS or some BSDs).
The post does not mention CI anywhere else, are they doing anything with it, keeping it on GitHub, or getting rid of it?
> Furthermore, the web frontend doesn't require JS, so I can use it from Dillo (I modified cgit CSS slightly to work well on Dillo).
That sounds like a bad approach to developing a Web browser, surely it would be better to make Dillo correctly work with the default cgit CSS (which is used by countless projects)?
No doubt this is desirable. However, adding all the CSS features required to support cgit may have been a lot more work than editing cgit's CSS. It's an attempt at avoiding yak shaving: recursively adding sub-projects that balloon a project's scope of work far beyond the original plan.
Dillo is actively developed, and the project of "migrate away from github" is complete, so now other work can be started and completed (like adding the CSS features required to support mainline cgit).
Although I'm not a fan of GH, I appreciate the ability to see how popular/valid some project is by looking at the number of stars (I know this is far from a perfect signal).
I'm much less likely to try projects that have a low number of stars, or projects in different places.
Another social issue on GitHub: you cannot use the "good first issue" tag on a public repository without being subjected to low quality drive-by PRs or AI slop automatically submitted by someone's bot.
I think the issue with centralization is still understated. I know developers who seem to struggle reading code if it's not presented by VS Code or a GitHub page. And then, why not totally capture everyone into developing just with GitHub Codespaces?
This is exactly what well-intentioned folk like to see: it's solving everyone's problems! Batteries included, nothing else is needed! Why use your own machine or software that doesn't ping into a telemetry hell-hole of data collection on a regular basis?
> To avoid this problem, I created my own bug tracker software, buggy, which is a very simple C tool that parses plain Markdown files and creates a single HTML page for each bug.
I love this. I used to be a big fan of Linear (because the alternatives were dog water), but this also raised the question: "why even have a separate, disconnected tool?"
Most of my personal projects have a TODO.md somewhere with a list of things I need to work on. If people really need a frontend for bugs, it wouldn't be more than just rendering that markdown on the web.
Well, if your bugs can be specified clearly in plain text and plain text only, then yeah, I'd also advocate for this approach. Unfortunately, that's not really the case in any bigger software project. I need screenshots, video recordings that are 100 megs, cross-issue linking etc. I hate JIRA (of course) but it gets it right.
>frontend barely works without JavaScript, ... In the past, it used to gracefully degrade without enforcing JavaScript, but now it doesn't.
And the github frontend developers are aware of these accessibility problems (via the forums and bug reports). They just don't care anymore. They just want to make the site appear to work at first glance which is why index pages are actual text in html but nothing else is.
I'd love to hear the inside story of GitHub's migration of their core product features to React.
It clearly represents a pretty seismic cultural change within the company. GitHub was my go-to example of a sophisticated application that loaded fast and didn't require JavaScript for well over a decade.
The new React stuff is sluggish even on a crazy fast computer.
My guess is that the "old guard" who made the original technical decisions all left, and since it's been almost impossible to hire a frontend engineer since ~2020 or so that wasn't a JavaScript/React-first developer the weight of industry fashion became too much to resist.
But maybe I'm wrong and they made a technical decision to go all-in on heavy JavaScript features that was reasoned out by GitHub veterans and accompanied by rock solid technical justification.
GitHub have been very transparent about their internal technical decisions in the past. I'd love to see them write about this transition.
> But beyond accessibility and availability, there is also a growing expectation of GitHub being more app-like.
> The first case of this was when we rebuilt GitHub projects. Customers were asking for features well beyond our existing feature set. More broadly, we are seeing other companies in our space innovate with more app-like experiences.
> Which has led us to adopt React. While we don’t have plans to rewrite GitHub in React, we are building most new experiences in React, especially when they are app-like.
> We made this decision a couple of years ago, and since then we’ve added about 250 React routes that serve about half of the average pages used by a given user in a week.
It then goes on to talk about how mobile is the new baseline and GitHub needed to build interfaces that felt more like mobile apps.
(Personally I think JavaScript-heavy React code is a disaster on mobile since it's so slow to load on the median (Android) device. I guess GitHub's core audience are more likely to have powerful phones?)
For contrast, gitea/forgejo use as little JavaScript as possible, and have been busy removing frontend libraries over the past year or so. For example, jquery was removed in favor of native ES6+.
Let them choke on their "app-like experience", and if you can afford it, switch over to either one. I cannot recommend it enough after using it "in production" daily for more than five years.
I honestly believe that the people involved likely already wanted to move over to React/SPAs for one reason or another, and were mostly just searching for excuses to do so - hence these kind of vague and seemingly disproportional reasons. Mobile over desktop? Whatever app-like means over performance?
Non-technical incentives steering technical decisions is more common than we'd perhaps like to admit.
It's one step forward, two steps back with this "server side rendering" framing of the issue, and in practice with Microsoft GitHub's observed behavior. They'll temporarily enable text on the web pages of the site in response to accessibility issues, then a few months later remove it on that type of page and even more others. As that thread and others I've participated in show, this is a losing battle. Microsoft GitHub will be JavaScript-application-only in the end. Human people should consider moving their personal projects accordingly. For work, well, one often has to do very distasteful and unethical things for money, and GitHub is where the money is.
There are ways around some of the issues there, such as using the GitHub API (I almost exclusively use the API), and/or using a user script (see below). Furthermore, on GitHub and on some other version control hosting services (such as GitLab), you can change "blob" to "raw" in the URL to access the raw files. However, as they say, it can be mirrored on multiple services (including self-hosting), and this would be a good idea, whether or not you use GitHub, so if you do not like GitHub then you do not have to use it.
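The blob-to-raw rewrite is purely mechanical (the repo path below is made up):

```shell
blob_url="https://github.com/example/project/blob/main/README.md"   # hypothetical
raw_url=$(printf '%s\n' "$blob_url" | sed 's|/blob/|/raw/|')
echo "$raw_url"   # prints https://github.com/example/project/raw/main/README.md
```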
Note that for some of the web pages on GitHub, the data is included as JSON data within the HTML file, although this schema is undocumented and sometimes changes. User scripts (which you might have to maintain due to these changes) can be used to display the data without any additional downloads from the server, and they can be much shorter and faster than GitHub's proprietary scripts.
Using a GPG key to sign the web page and releases is helpful (for the reasons they explain there), although there are some other things that might additionally help (if the conspiracy was not making it difficult to do these things with X.509 certificates in many ways).
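For the release side, the standard detached-signature flow looks like this (the file name is assumed; it requires an existing key):

```shell
# produce an ASCII-armored detached signature (creates release.tar.gz.asc)
gpg --armor --detach-sign release.tar.gz
# downstream users verify with:
gpg --verify release.tar.gz.asc release.tar.gz
```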
GitHub frontend is mostly still their own [1] Web Components based library. They use Turbo to do client side reloading.
They have small islands of React based views like Projects view or reworked Pull Request review.
The thing is, even if you disable JavaScript, sites still load sloow. Try it yourself. Frontend code doesn’t seem to be the bottleneck.
I hope you will continue maintaining a mirror on GH. Some tools like DeepWiki are excellent resources for learning about a codebase when there is not much documentation going around, but these tools only support pulling from GH.
A neat thing about GitHub is that every file on it can be accessed from URLs like https://raw.githubusercontent.com/simonw/llm-prices/refs/hea... which are served through a CDN with open CORS headers - which means any JavaScript application running anywhere can access them.
It's less about pulling and more about tools like DeepWiki making the assumption that its inputs live in GitHub, so repository URLs are expected to be GH URLs as opposed to a URL to a git repository anywhere.
That being said, there's no reason for tools like it to have those constraints other than pushing users into an ecosystem they prefer (i.e. GitHub instead of other forges).
What would be nice is an aggregator site one could submit to, with everyone hosting their projects on their own internet connection, so nobody depends on a single source for hosting. Maybe something like Bluesky with the AT protocol, but with git repositories.
For the past few days I've been playing with Forgejo (from the Codeberg people). It is fantastic.
The biggest difference is memory usage. GitLab is Ruby on Rails and over a dozen services (gitlab itself, then nginx, postgrest, prometheus, etc). Forgejo is written in go and is a single binary.
I have been running GitLab for several years (for my own personal use only!) and it slowly but reliably eats up the entirety of the RAM on a 16GB VM. I have only been playing with Forgejo for a few days, but it is using only 300MB of the 8GB of RAM I allocated, and that machine is running both the server and a runner (it is idle, but...).
I'm really excited about Forgejo and dumping GitLab. The biggest difference I can see is that Forgejo does not have GraphQL support, but the REST API seems, at first glance, to be fine.
EDIT: I don't really understand the difference between Gitea and Forgejo. Can anyone explain? I see lots of directories inside the Forgejo volume when I run it under Podman that clearly indicate the two are the same under the hood in many ways.
EDIT 2: Looks like Forgejo is a soft fork of Gitea, created in 2022 after some weird things happened with the governance of the Gitea project: https://forgejo.org/compare-to-gitea/#why-was-forgejo-create...
Our product studio, currently around 50 users who need daily git access, moved to a self-hosted Forgejo nearly 2 years ago.
I really can't overstate the positive effects of this transition. Forgejo is a really straightforward Go service with a very manageable mental model for storage and config. It's been easy and cheap to host and maintain; our team has contributed multiple bugfixes and improvements, and we've built a lot of internal tooling around Forgejo which would otherwise have required a much more elaborate (and slower) integration with GitHub.
Our main instance is hosted on-premises, so even in the extremely rare event of our internet connection going offline, our development and CI workflows remain unaffected (Forgejo is also a registry/store for most package managers, so we also cache our dependencies and Docker images).
Speaking of authentication, it also works as an OpenID provider, meaning any other web software that supports OpenID can authenticate against Forgejo... which in turn can look up users in other sources.
It also has wikis.
It's an underrated piece of software that uses a ridiculously small amount of computer resources.
Backups are handled by zfs snapshots (like every other server).
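As an illustration of how light that backup process can be, a nightly snapshot is a one-line system crontab entry. A sketch (the dataset name is a made-up example; note that `%` must be escaped as `\%` inside crontab lines):

```
# /etc/crontab -- nightly recursive snapshot of the Forgejo dataset
0 3 * * * root zfs snapshot -r tank/forgejo@nightly-$(date +\%F)
```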
We've also had at least 10× lower downtime compared to github over the same period of time, and whatever downtime we had was planned and always in the middle of the night. Always funny reading claims here that github has much better uptime than anything self-hosted from people who don't know any better. I usually don't even bother responding anymore.
Like others, I've switched to Gitea, but whenever I do visit GitLab I can't help but think its design/UX is so much nicer.
Gitea refused to do some perfectly sensible action (I think it had something to do with creating a fork of my own repo). Looking online, there's zero technical reason for this, and the explanation given was "this is how GitHub does things". Immediately uninstalled. I'm not here for this level of disrespect.
The filesystem on example.com is backed up daily. Since I do not need to send myself pull requests for personal projects, and I track my own TODOs and issues via TODO.md, what exactly am I missing? I have been using GitHub for open source projects and work for years, but for projects where I am the only author, why would I need a UI besides git and my code editor of choice?
-> convenience, collaboration, mobility
Some forges even include(d) instant messaging!
https://secure.phabricator.com/Z1336
If you ever find yourself wishing for a web UI as well, there's cgit[1]. It's what kernel.org uses[2].
[1]: https://git.zx2c4.com/cgit/ [2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
It's a shame that GitHub won the CI race by sheer force of popularity and keeps propagating its questionable design decisions. I wish more VCS platforms would base their CI systems on GitLab's, which is much, much better than GitHub Actions.
A shared filesystem is all you need to scale/replicate it, and it also makes the backup process quite simple.
I'm a little put off by the Google connection, but it seems like it could run rather independently.
It's not focused on all the other stuff that people think of in code forges (hosting the README in a pretty way, arbitrary page hosting, wiki, bug tracking, etc.) but can be integrated with 3rd party implementations of those fairly trivially.
Even browsing the git repos it hosts uses an embedded version of another tool (Gitiles).
https://gerrithub.io/ is a public instance
Having done CI integrations with both: Gerrit's APIs send pre- and post-merge events through the same channel, unlike GitHub, which needs multiple separate listeners.
With self-hosting it's still a single point of failure, and as for the article's "mirroring" argument, well... mirroring gives you redundancy for reads, but what about writes?
It's an interesting take on a purist problem.
How do people find your online project and know it's you (instead of an impersonator) without relying on an authority, like GitHub accounts or domain names? It is a challenging problem with no good solution. At least now the project is alive again and more resilient than before.
What I did:
This will occur if you have a `forgejo-runner daemon` running while you try to use `exec` -- both try to open the cache database, and only the first one to open it can proceed. You can avoid this by changing the daemon's cache directory via `cache.dir` in the config file, or by running the two processes as different users.
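For reference, a sketch of the relevant section of the runner's `config.yml` (the `cache.enabled`/`cache.dir` field names follow the generated default config; the directory path is a made-up example):

```yaml
cache:
  enabled: true
  # Give the daemon its own cache directory so that `forgejo-runner exec`,
  # which uses the default location, doesn't contend for the same cache database.
  dir: /var/lib/forgejo-runner/daemon-cache
```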
> It's a bit strange there are two files IMHO.
The `.runner` file isn't a config file, it's a state file -- not intended for user editing. But yes, it's a bit odd.
I agree with the sentiment, but want to point out that email can be used to turn push into pull, by auto-filtering the respective email notifications into a separate dedicated email folder, which you can choose to only look at when you want.
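That auto-filtering can also be done server-side with a Sieve rule. A sketch (the sender address and folder name are hypothetical; match on whatever headers your forge actually sets):

```
require ["fileinto"];

# Divert forge notification mail into a dedicated folder instead of the inbox,
# so it is only seen when that folder is deliberately opened.
if header :contains "From" "notifications@forge.example.com" {
    fileinto "Forge";
}
```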
This is a solution in search of a problem.
The hacker spirit alive and well.
Someone kind enough to explain this to me? What's the difference between push model and pull model? What about push model makes it difficult to work offline?
When you're working with e-mails, you sync the relevant IMAP mailbox locally, pulling all the proposed patches along with it; hence the pull model.
Then you can work through the proposed changes offline, on your local copy, and push the merged changes back once you're online.
It still needs work to match the capabilities of most source forges, but for small closed teams it already works very well.
Like are they calling the "GitHub pull request" workflow as the push model? What is "push" about it though? I can download all the pull request patches to my local and work offline, can't I?
I don't know how you can download a pull request as a set of patches and work offline; normally you have to open a branch, merge the PR into that branch, test things, and merge that branch into the relevant one.
Or you have to clone the forked repository, run your tests to see whether the change is relevant/stable and whatnot, and if it works, you can then merge the PR.
---
edit: Looks like you can get the PR as a patch or diff, and it's trivial, but you have to be online again to get it that way. So getting your mail from your mailbox is not enough; you have to fetch every PR as a diff, with a tool or manually, and then organize them. E-mail is a much more unified and simple way to handle all this.
---
In either case, reviewing the changes is not possible when you're offline, plus the pings from the PRs are distracting if your project is popular.
Random PR example, https://github.com/microsoft/vscode/pull/280106 has a diff at https://github.com/microsoft/vscode/pull/280106.diff
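The `.diff` trick generalizes: append `.diff` (or `.patch`) to any PR URL. A tiny helper, as a sketch:

```python
def pr_diff_url(pr_url: str, fmt: str = "diff") -> str:
    """Turn a GitHub pull-request URL into its .diff/.patch download URL."""
    if fmt not in ("diff", "patch"):
        raise ValueError("fmt must be 'diff' or 'patch'")
    return pr_url.rstrip("/") + "." + fmt

print(pr_diff_url("https://github.com/microsoft/vscode/pull/280106"))
# https://github.com/microsoft/vscode/pull/280106.diff
```

Fetching that URL still requires being online, of course, which is the limitation being discussed above.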
Another thing that surprises some is that GitHub's forks are actually just "magic" branches. I.e., the commits on a fork exist in the original repo: https://github.com/microsoft/vscode/commit/8fc3d909ad0f90561...
Discoverability in UX seems to have completely died.
It's yet another brick in the wall of the walled garden. It's left in place for now, but for how long?
IOW, it's deliberate. Plus, GitHub omits trivial features (e.g. deleting projects, an "add review" button, etc.) while porting their UI.
It feels like they don't care anymore.
Maybe another script to download them all at once and apply each diff to its own branch automatically.
Almost everything about git and github/gitlab/etc. can be scripted. You don't have to do anything on their website if you're willing to pipe some text around the old way.
> Almost everything about git and github/gitlab/etc. can be scripted.
Moving away from GitHub is more philosophical than technical at this point. I also left the site the day they took Copilot to production.
This is due to increasing competition in the source forge space. It’s good that different niches can be served by their preferred choice, even if it will be less convenient for devs who want to contribute a patch on a more obscure platform.
The solutions on the roadmap are not centralized like GitHub is. There is a real initiative to promote federation, so we would not need to rely on one entity.
Hah, exactly what we’re attempting with Tangled! Some big announcements to come fairly soon. We’re positioning ourselves to be the next social collab platform—focused solely on indies & communities.
What do you make of email as a candidate for this?
At least GitHub adds new features over time.
Gitlab has been removing features in favor of more expensive plans even after explicitly saying they wouldn’t do so.
Horses for courses I guess ¯\_(ツ)_/¯
Not as quickly as they add anti-features, imho.
1. Oh! It's "d.i.l.l.o."! I misread that as something else.
2. After reading many comments in this thread, I must admit I am stupefied at the sheer amount of stuff that can go into merely setting up and maintaining a version control system for a project.
3. I have cited every one of the same problems OP enumerates as my argument for switching new projects over to self-hosted Fossil. It also helps a good bit with #2 above when you're a small organization and you're the sole software engineer, sysadmin, and tier >1 support. It's a much simpler VCS that's closer to using Perforce, in my experience. YMMV, but it's the kind of VCS that doesn't qualify as a skill on a resume.
4. I also find GH deploy keys frustrating because I can't use the same key for multiple repositories. I have 3 separate applications that each run on 4 machines in my cluster, and I have to configure 12 separate deploy keys on GitHub and in my ~/.ssh/config file.
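A common workaround for the one-key-per-repo restriction is an SSH host alias per repository. A sketch of `~/.ssh/config` (the alias and key names are hypothetical):

```
# ~/.ssh/config -- one alias per deploy key
Host github-app1
    HostName github.com
    User git
    IdentityFile ~/.ssh/app1_deploy_key
    IdentitiesOnly yes
```

Each repo's remote then points at the alias instead of github.com, e.g. `git clone git@github-app1:owner/app1.git`, so SSH picks the right key per repository. It works, but it doesn't reduce the number of keys to manage.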
https://wkoszek.github.io/easyforgejo/
The best reason right here.
I've taken to loading projects in github.dev for navigating repos so I pay the js tax just once and it's fine for code reading. But navigating PRs and actions is terrible.
The post does not mention CI anywhere else, are they doing anything with it, keeping it on GitHub, or getting rid of it?
> Furthermore, the web frontend doesn't require JS, so I can use it from Dillo (I modified cgit CSS slightly to work well on Dillo).
That sounds like a bad approach to developing a Web browser, surely it would be better to make Dillo correctly work with the default cgit CSS (which is used by countless projects)?
Dillo is actively developed, and the project of "migrate away from github" is complete, so now other work can be started and completed (like adding the CSS features required to support mainline cgit).
Another social issue on GitHub: you cannot use the "good first issue" tag on a public repository without being subjected to low quality drive-by PRs or AI slop automatically submitted by someone's bot.
I think the issue with centralization is still understated. I know developers who seem to struggle reading code if it's not presented by VS Code or a GitHub page. And then, why not totally capture everyone into developing just with GitHub Codespaces?
This is exactly what well-intentioned folk like to see: it's solving everyone's problems! Batteries included, nothing else is needed! Why use your own machine or software that doesn't ping into a telemetry hell-hole of data collection on a regular basis?
I love this. I used to be a big fan of Linear (because the alternatives were dog water), but this also raised the question: "why even have a separate, disconnected tool?"
Most of my personal projects have a TODO.md somewhere with a list of things I need to work on. If people really need a frontend for bugs, it wouldn't be more than just rendering that markdown on the web.
Well, if your bugs can be specified clearly in plain text and plain text only, then yeah, I'd also advocate for this approach. Unfortunately, that's not really the case in any bigger software project. I need screenshots, video recordings that are 100 megs, cross-issue linking etc. I hate JIRA (of course) but it gets it right.
I'm not part of the project at all, but this is the only offline code review system I've found.
And the GitHub frontend developers are aware of these accessibility problems (via the forums and bug reports). They just don't care anymore. They just want the site to appear to work at first glance, which is why index pages are actual text in HTML but nothing else is.
It clearly represents a pretty seismic cultural change within the company. GitHub was my go-to example of a sophisticated application that loaded fast and didn't require JavaScript for well over a decade.
The new React stuff is sluggish even on a crazy fast computer.
My guess is that the "old guard" who made the original technical decisions all left, and since it's been almost impossible since ~2020 to hire a frontend engineer who isn't a JavaScript/React-first developer, the weight of industry fashion became too much to resist.
But maybe I'm wrong and they made a technical decision to go all-in on heavy JavaScript features that was reasoned out by GitHub veterans and accompanied by rock solid technical justification.
GitHub have been very transparent about their internal technical decisions in the past. I'd love to see them write about this transition.
Relevant quote:
> But beyond accessibility and availability, there is also a growing expectation of GitHub being more app-like.
> The first case of this was when we rebuilt GitHub projects. Customers were asking for features well beyond our existing feature set. More broadly, we are seeing other companies in our space innovate with more app-like experiences.
> Which has led us to adopting React. While we don’t have plans to rewrite GitHub in React, we are building most new experiences in React, especially when they are app-like.
> We made this decision a couple of years ago, and since then we’ve added about 250 React routes that serve about half of the average pages used by a given user in a week.
It then goes on to talk about how mobile is the new baseline and GitHub needed to build interfaces that felt more like mobile apps.
(Personally I think JavaScript-heavy React code is a disaster on mobile since it's so slow to load on the median (Android) device. I guess GitHub's core audience are more likely to have powerful phones?)
Let them choke on their "app-like experience", and if you can afford it, switch over to either one. I cannot recommend it enough after using it "in production" daily for more than five years.
Non-technical incentives steering technical decisions is more common than we'd perhaps like to admit.
no-one cares about the github mobile experience
microsoft making the windows 8 mistake all over again
It's where I interact with notifications about new issues and PRs for one thing. I doubt I'm alone there.
I'd like to see their logs about this.
Seems a small audience to optimise for.
Note that for some of the web pages on GitHub, the data is included as JSON data within the HTML file, although this schema is undocumented and sometimes changes. User scripts (which you might have to maintain due to these changes) can be used to display the data without any additional downloads from the server, and they can be much shorter and faster than GitHub's proprietary scripts.
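A sketch of what such a user script's extraction step looks like, in Python for readability (the schema is undocumented and changes, so the `<script type="application/json">` shape and the sample payload here are illustrative assumptions):

```python
import json
import re

# GitHub embeds some page data in <script type="application/json"> blocks.
# This pulls out and parses every such payload from a saved HTML page.
EMBEDDED_JSON = re.compile(
    r'<script[^>]*type="application/json"[^>]*>(.*?)</script>',
    re.DOTALL,
)

def embedded_payloads(html: str) -> list:
    return [json.loads(m) for m in EMBEDDED_JSON.findall(html)]

sample = '<script type="application/json" data-target="x">{"items": [1, 2]}</script>'
print(embedded_payloads(sample))
# [{'items': [1, 2]}]
```

Since the embedded schema shifts without notice, any script built on this needs occasional maintenance, as noted above.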
I went to GitLab from GitHub due to the Microsoft changes; my needs are very simple, and so far GitLab seems OK.
I also mirror just the current source on sdf.org via Gopher. If GitLab causes issues, this could very well become my main site.
Not only did they spend years rewriting the frontend from pjax to (I think) React? They also managed to lose customers because of it.
[1] https://github.blog/engineering/architecture-optimization/ho...
Git pulling isn't unique to GitHub, and it works over HTTP or SSH?
Demo: https://tools.simonwillison.net/cors-fetch?url=https%3A%2F%2...
Sourcehut is hosted in The Netherlands, and Codeberg in Germany.
[0]: https://tangled.org/
I suppose something like this with git and source code exists on Tor.
During the Arab Spring and Hong Kong protests, Bluetooth was used to share messages whilst the internet was cut off.
Obscurity is subjective. It might be obscure for you, and that's OK, but Dillo is not obscure for many people.
It's actually quite interesting, I recommend to read it!