Val Town Blog · October 2
Val Town's experience migrating from Supabase to Render

Val Town is a platform for hosting backend code snippets that was originally built on Supabase. The team decided to migrate to a simple Postgres setup on Render because of problems with Supabase's local development environment, incomplete documentation, outages caused by database backups, and a failed automatic database resize. As part of the migration, the team simplified how it uses the database, eliminated triggers, stored procedures, and row-level security rules, and adopted drizzle-orm to build SQL queries. The move improved development efficiency and database stability, letting the team ship new features faster.

🔧 Val Town was initially built on Supabase, using its database, access controls, frontend querying, and authentication to build backend applications quickly.

🚫 While using Supabase, the team ran into an unstable local development environment, missing or incomplete documentation, service interruptions caused by database backups, and a failed automatic resize, all of which hurt development efficiency and stability.

🔄 After moving to a simple Postgres setup on Render, the team eliminated triggers, stored procedures, and row-level security rules, switched to drizzle-orm for building SQL queries, and managed the database schema in code, significantly improving development efficiency and database stability.

⏱️ The data migration was involved: by migrating critical data first, optimizing pg_dump and scp speeds, and moving the log data in stages, the team migrated 40GB of data to the new environment with minimal service interruption.

🚀 After the migration, Val Town standardized its workflow across local development, preview-branch testing, and production migrations, so new features ship much faster and developers can touch the database with far less anxiety.

Migrating from Supabase


In the last couple weeks, we’ve migrated Val Town away from Supabase to a simpler database setup at Render. We’ve gotten a few questions about this, so we wanted to share what motivated the move, what we learned, and how we pulled it off. Supabase is an enormously successful product with a lot of happy users, but we ended up having a lot of problems getting it to scale to our team’s needs. Your mileage may vary, but we hope our experience is a useful data point.

Val Town was started by Steve Krouse in July 2022 as a site to write, run, deploy, and share snippets of server-side JavaScript. Our users describe us as “Codepen for the backend”. Folks use us to make little APIs, schedule cron jobs, and make little integrations between services, like downtime detectors, price watchers, programmatic notification services, etc.

Steve built the initial version of Val Town on Supabase. I (Tom MacWright) joined in Jan 2023.

Supabase can be a way to build your entire application backend without writing a backend server. Supabase is a database, but it also gives you ways to control access with row security policies. Then it lets you query your database right from your frontend, by using PostgREST. They integrate gotrue as an authentication layer and let users sign up for your service. And finally there’s a full-fledged UI that lets you administrate the database and all of the related services without writing a line of code or using a CLI.
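To make that model concrete, here is a minimal sketch of the kind of frontend query Supabase enables through supabase-js and PostgREST. The project URL, anon key, and the `vals` table with its columns are hypothetical placeholders, not Val Town's actual schema:

```typescript
import { createClient } from "@supabase/supabase-js";

// The client talks to PostgREST directly from the browser; row security
// policies in the database decide what this anonymous key is allowed to read.
const supabase = createClient(
  "https://your-project.supabase.co", // hypothetical project URL
  "public-anon-key" // hypothetical anon key
);

// Query the database straight from the frontend, with no backend server in between.
const { data, error } = await supabase
  .from("vals") // hypothetical table
  .select("id, name, code")
  .eq("is_public", true)
  .order("created_at", { ascending: false })
  .limit(20);

if (error) throw error;
console.log(data);
```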


In effect, Supabase turns your database into an entire backend application. It does so by using every Postgres trick in the book: not just row-level security, but we were using triggers, materialized views, database roles, and multiple schemas. Postgres is a tremendously powerful platform: you can go as far as writing JavaScript embedded in SQL to write logic in the database.

When it worked well, it was spectacular: it was the speed of using Firebase, which I had heard about so many times, but with a fast-moving open source stack.

The biggest problem we encountered using Supabase was local development. When I joined, all of our development happened in production: everyone connected to the production database, and all of our “migrations” were performed by modifying the live database schema. We’d test migrations by duplicating tables in production and migrating them before doing it for real. Sometimes we’d use the web interface to change things like column types and indexes, which is the scariest of all – it doesn’t provide a SQL preview of what it’s about to do, and sometimes what it does is unexpected.

This, to me, was pretty scary. Usually engineering teams work with local development environments and staging environments, and only cautiously and briefly touch the production database.

Thankfully, Supabase has been developing a toolchain for local development: the Supabase CLI. The CLI manages the Supabase stack locally: Postgres, gotrue, a realtime server, the storage API, an API gateway, an image resizing proxy, a restful API for managing Postgres, the Studio web interface, an edge runtime, a logging system, and more – a total of 11 Docker containers connected together.

Unfortunately, we just couldn’t get it to work. I hit everything from broken Docker containers, to a database migration system that couldn’t handle custom roles, to missing CLI help. We weren’t able to get a local development environment working for more than a day at a time: the CLI would break, or migrations were generated incorrectly and couldn’t be applied.

Part of the trouble with using the CLI was that the documentation isn’t quite written yet. The command supabase db remote commit is documented as “Commit Remote Changes As A New Migration”. The command supabase functions new is documented as “Create A New Function Locally.” The documentation page is beautiful, but the words in it just aren’t finished. These are crucial commands to document: db remote commit actually affects your local database and tweaks migrations. It’s really important to know what it’ll do before running it.

Unfortunately, the documentation on other parts isn’t much better: for example, the Supabase Remix integration has a decent tutorial, but is missing any conceptual or API reference documentation. Only after spending a day or two implementing the Remix integration did I realize that it would require a reset of all our user sessions, because using it meant switching from localStorage to cookie-based authentication.

Then came the downtime. Val Town is responsible for running scheduled bits of code: you can write a quick TypeScript function, click the Clock icon, and schedule it to run every hour from then on. We noticed that vals would stop working every night around midnight. After a bit of sleuthing, it turned out that Supabase was taking a database backup that took the database fully offline every night, at midnight. This is sort of understandable, but what’s less lovely is that it took a full week to get them to stop taking those backups and taking us offline.

Now, to be fair, Val Town currently pummels databases. It’s a write-heavy application that uses a lot of json columns and has a very large table in which we store all past evaluations. And Supabase was very helpful in their support, even helping us rearchitect some of the database schema. The application that we’re running is, in a way, a stress test for database configuration.

Supabase has a sort of unusual scheme for database size. Instead of pre-allocating a large database and filling it over time, databases start off small and are auto-resized as they grow. Your database starts out at 8GB, then gets upgraded once it hits 90% of that to a database 50% larger. Unfortunately, there was a fluke in this system: one Sunday, their system failed to resize our database and instead we were put in read-only mode with the disk 95% full. If you’ve dealt with systems like this before, you can guess what happens next.

If you get that close to your maximum disk size, you get a sort of catch-22: everything that you want to do to reduce the size on disk requires a little extra temporary space, space that you don’t have. Maybe you want to VACUUM a table to cut down on size – well, the VACUUM operation itself requires a little extra storage, just enough to put you over 100% of disk utilization and cause a restart. Want to try and save a few bytes by changing the type of a column? You’ll hit 100% utilization and restart.

To make matters worse, the Supabase web user interface heavily relies on the database itself – so the administration interface would crash whenever the database crashed. It’s preferable to have separate systems: one that runs the administration interface, another that is the thing being administrated.

Anyway, after a panicked Sunday afternoon four-alarm fire, I found a table that we were no longer using, which freed up 5 gigabytes of storage and let us get out of read-only mode. A few hours later the support team responded with an update.

Part of the moral of the story is that databases are hard. You could make a company out of just running databases and be successful if you can be reliable and scalable. There are whole companies like that, like CrunchyData. Database management is a hard and unforgiving job. But this is why we pay managed providers to be the wise experts who know how to tune shared_buffers and take backups without interrupting service. Sure, it’s nice to have a great user interface and extra features, but a rock-solid database needs to be the foundation.

Under the hood, Supabase is just Postgres. If you want to sign up for their service, never use the web user interface, and build your application as if Supabase was simply a Postgres database, you could. Many people do, in fact, because they’re one of the few providers with a free tier.

The hard part, though, is that if you use Supabase as a “Firebase alternative” – if you try to build a lot of your application layer into the database by using triggers, stored procedures, row-level security, and so on – you’ll hit some places where your typical Postgres tools don’t understand Supabase, and vice-versa.

For example, there are plenty of great systems for managing migrations. In the TypeScript ecosystem, Prisma and drizzle-orm were at the top of our list. But those migrations don’t support triggers, or row-level security, so we would have a hard time evolving our database in a structured way while still following the Supabase strategy. So migrations would be tough. Querying is tough too – querying while maintaining row-level security is a subject of discussion in Prisma, but it isn’t clear how you’d do it in Kysely or drizzle-orm.

The same sorts of disconnects kept happening when we tried using our database as Postgres and then administrating it sometimes with Supabase. For example, we’d have a json column (not jsonb), but because the Studio interface was opinionated, it wouldn’t correctly show the column type. Or we’d have JSON values in a table, and be unable to export them because of broken CSV export in the web interface. We’d see issues with composite foreign keys being displayed incorrectly, and be afraid of issues where modifying the schema via the UI ran unexpected and destructive queries. A lot of these issues were fixed – the Studio now shows varchar and json types instead of a blank select box, and should export CSVs correctly.

In both directions, it felt like there were disconnects, that neither system was really capturing 100% of its pair. There were too many things that the typical database migration and query tools couldn’t understand, and also things that we could do to the database directly that wouldn’t be correctly handled by the Supabase web interface.

Unfortunately, some of the limitations of the Supabase strategy trickled into our application design. For example, we were building all of the controls around data access with Row-Level Security, which, as the name implies, is row-level. There isn’t a clear way to restrict access to columns in RLS, so if you have a “users” table with a sensitive column like “email” that you don’t want everyone to have access to, you have a bunch of tough solutions.

Maybe you can create a database view of that table, but it’s easy to shoot yourself in the foot and accidentally make that publicly-readable. We ended up having three user tables – Supabase’s internal table, auth.users, which is inaccessible to the frontend; a private_users table, which was accessible to the backend; and a users table, which was queryable from the frontend.

We also architected a lot of denormalized database tables because we couldn’t write efficient queries using the default Supabase query client. Of course there are always tradeoffs between query-time and insert-time performance, but we were stuck at a very basic level, unable to do much query optimization and therefore pushed to either write a lot of small queries or store duplicated data in denormalized columns to make it faster to query.

I suspect that there’s a way to make this work: to write a lot of SQL and rely heavily on the database. You can do anything in Postgres. We could write tests with pgTAP and write JavaScript inside of Postgres functions with plv8. But this would buy us even more into the idea of an application in our database, which for us made things harder to debug and improve.

Ultimately, we switched to using a “vanilla” Postgres service at Render.

We didn’t want to self-host Supabase, because the devops issues were only part of the problem: we just wanted a database. Render has been hosting the rest of Val Town for a few months now and has been pretty great. Render Preview Environments are amazing: they spin up an entire clone of our whole stack — frontend remix server, node api server, deno evaluation server, and now postgres database — for every pull request. The Blueprint Specification system is a nice middle ground between manually configured infrastructure and the reams of YAML required to configure something like Kubernetes.

We considered a couple other Postgres hosts, like CrunchyData, neon, RDS, etc., but it’s hard to beat Render’s cohesive and comprehensive feature-set. They are also extremely competent and professional engineers; I’ve hosted applications on Render for years and have very few complaints.

The goal was to be able to run the database locally and test our migrations before applying them in production. We rewrote our data layer to treat the database as a simple persistence layer rather than an application. We eliminated all the triggers, stored procedures, and row-level security rules. That logic lives in the application now.

We dramatically simplified how we’re using the database and started using drizzle-orm to build SQL queries. It works like a dream. Now our database schema is captured in code, we can create pull requests with proposed database migrations, and nobody connects to the production database from their local computers. We were even able to eliminate a few tables because we could more efficiently query our data and define more accurate access controls.
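As a rough illustration of what “schema captured in code” looks like, here is a small drizzle-orm sketch. The users table, its columns, and the connection setup are hypothetical, not Val Town's real schema:

```typescript
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { pgTable, serial, text, timestamp, boolean } from "drizzle-orm/pg-core";
import { eq } from "drizzle-orm";

// Hypothetical table definition. Because the schema lives in code, changes to
// it show up as reviewable diffs and generated migrations in pull requests.
export const users = pgTable("users", {
  id: serial("id").primaryKey(),
  handle: text("handle").notNull(),
  email: text("email").notNull(), // access rules enforced in the application, not via RLS
  isPublic: boolean("is_public").default(true),
  createdAt: timestamp("created_at").defaultNow(),
});

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const db = drizzle(pool);

// drizzle-orm builds the SQL; the database stays a plain persistence layer.
const publicUsers = await db
  .select({ id: users.id, handle: users.handle })
  .from(users)
  .where(eq(users.isPublic, true));
```

Migrations can then be generated from that schema definition and checked into the repository alongside the code that uses it.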

Migrating the data took a week. The first issue was how large our database is: 40GB.

We considered taking our service down for a couple hours to do the migration. But we’re a cloud service provider and we take that responsibility seriously: if we go down, that means our users’ API endpoints and cron jobs stop running. Downtime was our last resort.

The key insight was that 80% of our data was in our tracing table, which stores the historical evaluation of every Val Town function run. This was historical data and isn’t essential to operations, so we chose to first migrate our critical data and then gradually migrate this log-like table.

The next problem was improving the download and upload speeds. We spun up an EC2 server next to Supabase in us-east-1 and a server in Render’s Ohio region, to be as close as possible for the download and upload, respectively. After some more pg_dump optimizations, the download of everything but the tracing table took 15 minutes. We scp-ed it to the Render server and did a pg_restore from there, which took another 10 minutes. We then cut our production servers over to the new Render Postgres database.

We informed our customers about the migration and that their tracing data would be restored shortly in the background. There was a handful of new data that had been created in the intervening ~30 minutes. We pulled that diff data manually and uploaded it to the new database. The evaluations table dump took all night. Then the scp itself took a couple of hours and $6 in AWS egress fees. It took another night to finish the upload of the historical tracing data.
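For readers wondering what the gradual, background migration of a log-like table can look like, here is a minimal cursor-and-batch sketch using node-postgres. The evaluations table, its columns, the batch size, and the environment variable names are hypothetical; this illustrates the general approach, not Val Town's actual script:

```typescript
import { Client } from "pg";

// Hypothetical connection strings for the old (Supabase) and new (Render) databases.
const source = new Client({ connectionString: process.env.OLD_DATABASE_URL });
const target = new Client({ connectionString: process.env.NEW_DATABASE_URL });

await source.connect();
await target.connect();

// Copy the log-like table in id-ordered batches so the live service keeps running.
let cursor = 0;
const BATCH_SIZE = 5000;

for (;;) {
  const { rows } = await source.query(
    "SELECT id, val_id, result, created_at FROM evaluations WHERE id > $1 ORDER BY id LIMIT $2",
    [cursor, BATCH_SIZE]
  );
  if (rows.length === 0) break;

  for (const row of rows) {
    await target.query(
      "INSERT INTO evaluations (id, val_id, result, created_at) VALUES ($1, $2, $3, $4) ON CONFLICT (id) DO NOTHING",
      [row.id, row.val_id, row.result, row.created_at]
    );
  }
  cursor = rows[rows.length - 1].id;
}

await source.end();
await target.end();
```

In practice you would use multi-row inserts or COPY for throughput, but the cursor-and-batch shape is what lets a backfill run for hours in the background without blocking the live system.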

Word to the wise: when you’re moving data into and out of databases, it pays todo it in the same network as the database servers.

Now that Val Town is humming along with a simple Postgres setup on Render, we’re able to evolve our database with traditional, old-fashioned migrations, and develop the application locally. It feels strangely (back to the) future-istic to have migrations that run locally, in preview branches, and upon merging to main. I shipped likes (the ability to ❤️ a val) in a couple hours, and importantly, without anxiety. Back in our Supabase days, we delayed features like that merely because touching the database schema was scary.

Sometimes we miss the friendly table view of the Supabase interface. We watch their spectacular launch weeks with awe. Using Supabase was like peeking into an alternative, maybe futuristic, way to build applications. But now we’re spending our innovation tokens elsewhere, and have been able to ship faster because of it.
