backend steps

  1. create an empty backend directory

  2. create __init__.py

  3. create requirements.txt

  4. create database.py

download the packages using pip and requirements.txt
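
a typical requirements.txt for this stack could look like the following (the package set is an assumption based on what these notes use; pin versions as needed):

fastapi
uvicorn[standard]
sqlalchemy
psycopg2-binary
alembic
pydantic

then install with: pip install -r requirements.txt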

add the real Postgres connection string in database.py
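
a minimal sketch of database.py, assuming a plain SQLAlchemy setup (the connection string below is a placeholder; the names SQLALCHEMY_DATABASE_URL, Base and get_db are chosen here because the notes import them later):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, declarative_base

# placeholder connection string; replace with your real Neon/Postgres URL
SQLALCHEMY_DATABASE_URL = "postgresql+psycopg2://username:password@localhost:5432/yourdbname"

engine = create_engine(SQLALCHEMY_DATABASE_URL)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

# FastAPI dependency that opens a session per request and always closes it
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()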

run alembic init alembic (this creates alembic.ini and the alembic/ folder)

  1. Open alembic.ini and set your DB URL:

sqlalchemy.url = postgresql+psycopg2://username:password@localhost:5432/yourdbname
why alembic migration

Good thinking 👍 You’re mixing two related but different concerns:

  • Schema migration management (Alembic vs manual SQL)

  • Security (SQL injection)

Let’s clarify both.


🔹 1. Is using Alembic “standard”?

Yes ✅

  • In most Python projects with SQLAlchemy, Alembic is the de facto standard for handling database schema migrations.

  • FastAPI, Flask, and Django projects usually include a migration tool (Alembic for SQLAlchemy, Flask-Migrate wrapper, Django migrations for Django).

  • Enterprises rely on migration frameworks because:

    • They keep track of schema changes in version control.

    • They ensure repeatability (same migrations can run in dev, staging, prod).

    • They reduce human error (no forgetting which ALTER TABLE you ran last month).


🔹 2. What if I just run raw SQL commands (without Alembic)?

That’s still possible — many people do it in small projects. But:

  • It’s not unsafe in terms of SQL injection, because schema changes are usually written by developers, not built from user input.

    ALTER TABLE users ADD COLUMN age INT;

    ← This won’t be SQL-injected unless you dynamically build it from untrusted input (which you should never do); see the sketch after this list.

  • The real problem is maintainability and consistency:

    • Hard to remember what SQL changes you applied in dev vs prod.

    • No automatic rollback if something goes wrong.

    • New team members won’t know the schema history.
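
To make the injection point above concrete: the risk lives in runtime queries built from user input, not in hand-written DDL. A minimal sketch (the users table and the find_user helper are hypothetical, not from these notes):

from sqlalchemy import text
from sqlalchemy.orm import Session

def find_user(db: Session, user_input: str):
    # UNSAFE (shown only as a comment): string formatting lets user input become SQL
    #   db.execute(text(f"SELECT * FROM users WHERE name = '{user_input}'"))
    # SAFE: bound parameters keep user input as data, not executable SQL
    return db.execute(
        text("SELECT * FROM users WHERE name = :name"),
        {"name": user_input},
    ).fetchall()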


🔹 3. What do enterprises prefer?

Enterprises almost always prefer migration frameworks like:

  • Alembic (with SQLAlchemy / FastAPI / Flask).

  • Liquibase or Flyway (database-agnostic, often used in Java/Spring apps).

  • Django migrations (built into Django ORM).

Why?

  • Version control of schema → part of Git history.

  • Automated CI/CD pipelines can run migrations safely.

  • Audit/compliance → easy to prove which migration changed what and when.


Summary:

  • Alembic is the standard with SQLAlchemy (not just “safe,” but essential for versioning).

  • Raw SQL migrations aren’t unsafe but are hard to manage at scale.

  • Enterprises always prefer migration tools (Alembic, Flyway, Liquibase) because they bring automation, consistency, and traceability.


👉 Do you want me to show you how a typical enterprise pipeline runs Alembic migrations automatically (e.g., in CI/CD before deploying a new FastAPI version)?

now open alembic/env.py

make sure these lines exist:

from database import SQLALCHEMY_DATABASE_URL, Base  
target_metadata = Base.metadata
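
For context, a sketch of how those lines typically sit inside alembic/env.py (overriding the URL from code is optional; you can instead keep it only in alembic.ini as above):

# alembic/env.py (excerpt)
from alembic import context
from database import SQLALCHEMY_DATABASE_URL, Base

config = context.config
# optional: reuse the app's connection string instead of duplicating it in alembic.ini
config.set_main_option("sqlalchemy.url", SQLALCHEMY_DATABASE_URL)

# autogenerate compares this metadata against the live database schema
target_metadata = Base.metadata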

now run

alembic revision --autogenerate -m "init tables"

this creates a new migration script under alembic/versions/ (the filename starts with the revision id and the message slug)
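
a hedged sketch of what that autogenerated file roughly looks like; the revision ids are filled in by Alembic and the users table is hypothetical, shown only for shape:

# alembic/versions/<revision_id>_init_tables.py (illustrative only)
from alembic import op
import sqlalchemy as sa

revision = "abc123"          # generated by Alembic
down_revision = None
branch_labels = None
depends_on = None

def upgrade():
    op.create_table(
        "users",                                     # hypothetical table
        sa.Column("id", sa.Integer(), primary_key=True),
        sa.Column("name", sa.String(), nullable=True),
    )

def downgrade():
    op.drop_table("users")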

apply the migration to the database

the command below creates the tables

alembic upgrade head

Note: use a new Neon DB account (or a fresh database) for this, otherwise the migration can erase your previous tables

alembic care

Good question 👍 — Alembic can be dangerous if migrations are not handled carefully, because it can drop/alter tables, which may lead to data loss. Here are the main things to take care of when using Alembic:


✅ Best Practices to Prevent Data Loss with Alembic

  1. Never autogenerate blindly

    • alembic revision --autogenerate only detects schema changes, not whether dropping a column or table will cause data loss.

    • Always read the generated migration script before applying.

  2. Back up your database

    • Always run a pg_dump (PostgreSQL) or equivalent backup before running migrations in staging/prod.

    • Example for Postgres: pg_dump yourdbname > backup.sql

  3. Use --sql flag in production first

    • Instead of running migrations directly, generate the SQL script first, e.g. alembic upgrade head --sql > migration.sql

    • Review the script → if safe, then apply manually.

  4. Be careful with column drops/renames

    • Alembic treats rename as drop + add (unless you explicitly tell it).

    • You may lose column data if you don’t handle it explicitly with op.alter_column (see the rename sketch after this list).

  5. Split destructive vs non-destructive migrations

    • Example: First add a new column → backfill data → then drop old column in a separate migration.

  6. Test migrations locally before production

    • Always apply migrations on a local/staging copy of the DB to check behavior.

  7. Version control migration scripts

    • Never delete or overwrite migration files once pushed to a shared repo.

    • Each migration should represent a historical step in schema evolution.

  8. Have a rollback strategy

    • Alembic doesn’t guarantee down migrations will work in all cases (especially destructive ones).

    • If migration fails, you’ll need backups or manual SQL fixes.
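
For the rename case in point 4, a minimal sketch of a migration that renames a column in place instead of dropping and re-adding it; the users.name → users.full_name rename is a hypothetical example:

# upgrade()/downgrade() of a dedicated migration
from alembic import op

def upgrade():
    # rename in place instead of the autogenerated drop + add, so the data is kept
    op.alter_column("users", "name", new_column_name="full_name")

def downgrade():
    op.alter_column("users", "full_name", new_column_name="name")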


👉 Rule of thumb:

  • Development DB → You can reset/drop if needed.

  • Production DB → Always backup, review migration script, and test before applying.


Do you want me to also show you how to safely rename a column without losing data in Alembic?

now the tables are created

now create schema.py

crud.py

main.py
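
a minimal sketch of how these three files can fit together; the User model, its fields, and the models module it lives in are assumptions for illustration, so adapt the names to your actual project:

# schema.py (Pydantic request/response shapes)
from pydantic import BaseModel

class UserCreate(BaseModel):
    name: str

class UserOut(BaseModel):
    id: int
    name: str

    class Config:
        orm_mode = True   # pydantic v1; use from_attributes=True on pydantic v2


# crud.py (DB operations; models.User is a hypothetical SQLAlchemy model built on Base)
from sqlalchemy.orm import Session
import models, schema

def create_user(db: Session, user: schema.UserCreate):
    db_user = models.User(name=user.name)
    db.add(db_user)
    db.commit()
    db.refresh(db_user)
    return db_user

def get_users(db: Session):
    return db.query(models.User).all()


# main.py (FastAPI app wiring schema + crud + the get_db dependency from database.py)
from fastapi import FastAPI, Depends
from sqlalchemy.orm import Session
import crud, schema
from database import get_db

app = FastAPI()

@app.post("/users", response_model=schema.UserOut)
def create_user_endpoint(user: schema.UserCreate, db: Session = Depends(get_db)):
    return crud.create_user(db, user)

@app.get("/users", response_model=list[schema.UserOut])
def list_users(db: Session = Depends(get_db)):
    return crud.get_users(db)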

run app

uvicorn main:app --reload

Swagger URL to test the CRUD operations:

http://127.0.0.1:8000/docs
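
besides Swagger, you can hit the endpoints directly; a quick sketch using the requests library against the hypothetical /users routes from the sketch above (your route names may differ):

import requests

BASE = "http://127.0.0.1:8000"

# create a user, then list users
print(requests.post(f"{BASE}/users", json={"name": "alice"}).json())
print(requests.get(f"{BASE}/users").json())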

local testing is done; now push to prod

create a GitHub repo

git init
git remote add origin https://github.com/<username>/myrepo

ssh -T git@github.com   <-- checks which GitHub user is currently configured

create a .gitignore and add:

gitignore

add a Dockerfile

dockerfile

create a repo on Docker Hub

create an access token to add to GitHub secrets

pavan8767/3-tier-backend - repo

docker login -u pavan8767

read/write token dckr_pat_… (keep it in GitHub secrets only; never commit it to the repo or paste it in notes)

github workflow
