Too much text? Too bad, there's no TL;DR section.
I discovered this CMS years ago, back when I was looking for a polished solution to replace my old, heavy Blogspot. At the time I was still running on lit-html, the web-component library people were talking about, which was fast and built on what modern browsers already shipped. But after working with it for several years, its weaknesses were exposed, and I decided I had to find a way to get rid of it. I won't go into how bad it was. That's when I found a blog template for Next.js 10 built on this CMS. I deployed it, and it became my main solution for the next five years, until last week.
At first I thought, yeah, external services are great. It's a comprehensive solution dedicated to content and offers tons of things ready for production, but at what cost? From a developer's point of view, several things kept pushing me away from it. Let's discuss.
First of all, I didn't own my content. Everything, my knowledge, my writing, my data, my resources, lived on their servers. The interesting part is, I never thought to back up my data, and there weren't any good options for doing so anyway. As someone with a strong sense of ownership, I am firmly against this practice. What's the point of using a webhook to back up my data as a secondary copy, when it should be the primary source?

Secondly, they are a startup. They offer subscriptions and they want us to subscribe. Obviously, this is how they survive, but the main point is, their service is expensive. I had very limited storage for my content, my bandwidth was also limited, and most features required a subscription. They even buried the free plan and only advertised the paid one, starting at over 100 euros. That's how the story went. I decided to move on and find myself a better solution.
Honestly, this was a tough decision. There are so many things you have to be aware of, and as a developer, you also have to please your dev mind. Then I saw some news about Payload CMS: a one-click deployment template for the Cloudflare platform. All serverless, super cheap. I decided to boot it up, but...
Because this time, Cloudflare suggested deploying Next.js apps to Cloudflare Workers with their OpenNext adapter, and the infamous 3MB total bundle size limit hit me harder than anything. I mean, I love their solutions: global reach, high availability, and a very particular mindset in how they design their services. But there was a very good reason to worry: vendor lock-in. Right? You can't use D1 anywhere else; you have to use it with Workers. The same applies to most of their other services. You might say, hey, why not just use them over HTTP? You have to be kidding me, the latency would kill your interest. That's a good story, and we will get back to it later.
I went with Vercel, again, to host my own dedicated Payload CMS admin dashboard, and the rest of it was pretty easy. I needed a cheap database, as good as D1 but without the vendor lock-in, so I went with Turso. Honestly, the biggest temptation is that they offer much more storage than any other provider, with pretty good SLOs. Similar to Cloudflare D1, I get 5GB of free storage, a monthly bandwidth allowance, a highly available service, and global distribution, which sounds like a dream. But reality hit me hard. There was an incident, and somehow Turso was one of the providers with the slowest recovery. How funny. I hope an incident that big never happens again. For the rest of the stack, yeah, I went with Cloudflare: R2 storage and Image transformations. Best of the best, right? Let's dive into how I did that.
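Wiring Turso into Payload is not much code. Here is a minimal sketch, not my exact config: Payload's `@payloadcms/db-sqlite` adapter speaks libSQL, so it can point at a Turso database instead of a local file. The environment variable names are my own choices.

```typescript
// payload.config.ts (sketch): Payload CMS backed by a Turso database.
// The sqliteAdapter's libSQL client accepts a remote URL plus auth token.
import { buildConfig } from 'payload'
import { sqliteAdapter } from '@payloadcms/db-sqlite'

export default buildConfig({
  db: sqliteAdapter({
    client: {
      url: process.env.TURSO_DATABASE_URL!, // e.g. libsql://<db-name>.turso.io
      authToken: process.env.TURSO_AUTH_TOKEN!, // from `turso db tokens create`
    },
  }),
  collections: [], // your collections go here
  secret: process.env.PAYLOAD_SECRET!,
})
```

The nice part of this setup is that nothing about it is Workers-specific: the same config runs on Vercel, a VPS, or locally against a plain SQLite file.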
One of the biggest issues when designing and booting up Payload CMS was how to maintain backward compatibility with my legacy CMS while staying open-minded enough to adopt new ideas. One of the changes that felt best was replacing the old Markdown renderer with a proper rich-text editor, Lexical (from Meta). This was a huge step and required me to redo every single directive I had built with Markdown (remark). But you know what? Lexical is so much harder, and its APIs are confusing and complex. The frontend rendering part is easy enough, but the editor part, specifically introducing new features to the editor, is a big problem. I tried something simple, a YouTube embed node, and the boilerplate was already too much. One thing I really miss is how easy it was to develop a new Markdown directive: you could just look at its syntax and understand it. But at least content people love a rich editor far more than a plain-text editor with some weird syntax. I guess this is the price I have to pay for running my own CMS server.
If you are looking for some code snippets, then I am not giving you anything. I am lazy.
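Okay, fine, one exception, since the rendering half really is the easy part. This is a minimal sketch (not my actual code) of walking a serialized Lexical state into HTML; real Lexical nodes carry many more fields and types than the three handled here, so treat the shapes as simplified assumptions.

```typescript
// Simplified shape of a serialized Lexical node. Real nodes also carry
// format flags, directions, versions, etc.; only the basics are modeled.
type LexicalNode = {
  type: string
  text?: string
  tag?: string // headings carry "h1".."h6"
  children?: LexicalNode[]
}

// Recursively walk the tree, emitting HTML for the node types we know
// and silently unwrapping anything we don't recognize.
function renderNode(node: LexicalNode): string {
  const inner = (node.children ?? []).map(renderNode).join('')
  switch (node.type) {
    case 'root':
      return inner
    case 'paragraph':
      return `<p>${inner}</p>`
    case 'heading':
      return `<${node.tag}>${inner}</${node.tag}>`
    case 'text':
      return node.text ?? ''
    default:
      return inner // unknown node types: keep children, drop the wrapper
  }
}
```

A custom node like my YouTube embed would just be one more `case` here; the painful part is the editor side, not this.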
Then what else? Ah, here is the most frustrating part of all: the image solution. Back then, the old service provided me with pretty much everything. I could just query an image and put it on the website at dynamic resolutions, so rendering was smooth. Now, after getting rid of it, I have to feel the pain.
Let's start with the obvious approach. You use Cloudflare R2, and Image URL transformation comes with it. It feels like a dream: from just one stored image, a dynamic URL can output any variant you need. How cool is that? Right? But the worst part is that transforming on the fly comes at a cost, and the cold start is unacceptable. Imagine a new user visiting your website, and the site needs one to five seconds to load a single image. The user has to stare at the low-resolution base64 placeholder the entire time. I don't think they will want to visit again. I've come across all sorts of articles on the internet; most people are happy with Vercel Image and Cloudflare Images, so cheap, so good. But I had to think of a way out for myself, something that fits my needs.
I know some providers, like Cloudflare or Cloudinary, have services that pre-compute image variants, but they are expensive. I have no idea why this type of service costs so much these days; let's blame people buying new high-resolution TVs. I am just kidding. So yeah, I went with the worst approach. Each time I upload a new image, I pre-compute eight variants, store them back in Cloudflare R2, and then a responsive image component on the frontend loads whichever variant fits. I am happy. Should people follow me and do the same? Absolutely not. Why? The obvious downside is that you now have to maintain your own infrastructure. The images and their variants can eat up your storage, and you also have to maintain the availability of the pipeline and handle every edge case. Think about what happens if a worker fails to generate the images, or gets stuck and blocks the UI from responding, causing a bad user experience. It comes down to the classic buy-versus-build question, but in this case, I am happy to build.
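The frontend half of the pre-computed approach is just a `srcset`. A sketch, assuming eight fixed widths and an R2 key scheme of `<name>-<width>.webp` (both are my own inventions; adjust to whatever your upload worker actually writes):

```typescript
// The eight pre-computed widths, generated once at upload time.
const VARIANT_WIDTHS = [320, 480, 640, 768, 1024, 1280, 1600, 1920]

// Build the srcset string a responsive <img> needs, mapping each width
// to its pre-computed R2 object, e.g. "https://cdn/hero-320.webp 320w".
function buildSrcSet(baseUrl: string, name: string): string {
  return VARIANT_WIDTHS.map((w) => `${baseUrl}/${name}-${w}.webp ${w}w`).join(
    ', ',
  )
}
```

The component then sets `srcset={buildSrcSet(cdn, name)}` plus a `sizes` attribute, and the browser picks the variant; no resizing happens at request time, which is the whole point.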
With this big undertaking, I also did something I had wished I could do years ago when playing around with the template: I upgraded the whole project to TypeScript, introduced infinite loading, and added filtering by category and tag. Most of this I could only dream of back then.
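Infinite loading and filtering fall out of Payload's REST API almost for free: it paginates with `page`/`limit` and filters with a query-string-encoded `where` clause. A sketch of the query builder; the collection name and field names are assumptions, not my real schema:

```typescript
// Build a Payload REST query like
// /api/posts?page=2&limit=10&where[category][equals]=dev
// (brackets end up percent-encoded by URLSearchParams).
function postsQuery(opts: {
  page: number
  limit?: number
  category?: string
  tag?: string
}): string {
  const params = new URLSearchParams()
  params.set('page', String(opts.page))
  params.set('limit', String(opts.limit ?? 10))
  if (opts.category) params.set('where[category][equals]', opts.category)
  if (opts.tag) params.set('where[tags][contains]', opts.tag)
  return `/api/posts?${params.toString()}`
}
```

The infinite-scroll part is then just fetching the next `page` while the response's `hasNextPage` flag stays true and appending `docs` to state.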
Let's talk about one more thing: how I migrated all of my content from the old CMS to Payload CMS. It's not that hard. Let's break it down into three parts: the media, the content, and the structural data. All of them could be gathered with a script, the same way I had exposed the content to the public. I used that route to archive everything into JSON files: one folder for media and metadata, and another full of Markdown that needed to be loaded. The main task was converting the old Markdown format to the Lexical state, which I did with the official converter. I had to manually check every single post, but it wasn't that huge a task, so it was fine. The most annoying part was, again, the images. I didn't make the right decisions at the start, so I had to rerun the script to load, process, and compute the low-resolution placeholder, with each image having eight variants, over and over until I was happy with the result. Again and again, I really hated how slow Cloudflare's Image URL transformation was; my website felt like a dead place while I relied on it. Yikes!
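The shape of the migration step is simple enough to sketch. Here the Lexical conversion is passed in as a function (in the real script that slot is filled by Payload's official Markdown converter), and the legacy field names and slug rule are my own guesses, not the actual export format:

```typescript
// Minimal shape of one post in the legacy JSON export (assumed fields).
type LegacyPost = { title: string; body: string; publishedAt: string }

// Derive a URL slug from the title: lowercase, non-alphanumerics to
// dashes, then trim dashes from both ends.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-|-$/g, '')
}

// Map a legacy post to a Payload document, delegating the hard part
// (Markdown -> Lexical state) to the injected converter.
function toPayloadDoc(post: LegacyPost, toLexical: (md: string) => unknown) {
  return {
    title: post.title,
    slug: slugify(post.title),
    content: toLexical(post.body),
    publishedAt: post.publishedAt,
  }
}
```

The actual script is this in a loop over the export folder, followed by a `payload.create` call per document, which is also why reruns were cheap when the image decisions changed.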
And that's almost everything. By having a dedicated CMS server, I find myself on the bright side of the hill. I can customize my components, serve things only my stack can do, and, most of all, please my dev mind. I think this migration was worth it in many ways. Besides the reasons I shared at the start, I now have a chance to learn new things and catch up with the modern frontend community, since my previous FE experience was from around five years ago. The industry is moving fast, and I am now a part of it.
So, is Payload CMS that good? Not really. Let's wait for part 2 and find out!
And if you're considering a similar migration, feel free to reach out, comment, or PM me; I'd love to exchange notes.
