Java.DBMigrationTools.How do you write a simple “create table” migration?

Short interview answer (what you say aloud)

A simple CREATE TABLE migration is a versioned script that creates a table idempotently and defines the primary key, column types, constraints, and usually indexes.
It should be deterministic, safe to run once, and easy to roll forward.


Canonical example (PostgreSQL)

-- V001__create_users.sql

CREATE TABLE IF NOT EXISTS users (
    id          BIGSERIAL PRIMARY KEY,
    email       VARCHAR(255) NOT NULL UNIQUE,
    password    TEXT NOT NULL,
    created_at  TIMESTAMP NOT NULL DEFAULT now()
);

Why this is “correct”

  • IF NOT EXISTS → idempotent
  • Explicit PK
  • Constraints (NOT NULL, UNIQUE)
  • Reasonable defaults
  • No business logic

Slightly more “senior” version (recommended)

CREATE TABLE IF NOT EXISTS users (
    id          BIGINT GENERATED ALWAYS AS IDENTITY,
    email       VARCHAR(255) NOT NULL,
    password    TEXT NOT NULL,
    status      VARCHAR(32) NOT NULL DEFAULT 'ACTIVE',
    created_at  TIMESTAMP NOT NULL DEFAULT now(),

    CONSTRAINT pk_users PRIMARY KEY (id),
    CONSTRAINT uq_users_email UNIQUE (email)
);

Why this is better

  • Explicit constraint names → clean rollbacks & debugging
  • IDENTITY instead of SERIAL (modern Postgres)
  • No magic behavior

With indexes (common interview follow-up)

CREATE INDEX IF NOT EXISTS idx_users_created_at
    ON users (created_at);

👉 Indexes should usually be created explicitly, not inline.


With comments (production-ready)

COMMENT ON TABLE users IS 'Application users';
COMMENT ON COLUMN users.email IS 'Unique user email';

How this looks in real tools

Flyway

V001__create_users.sql

Liquibase (SQL-based)

001-create-users.sql (referenced from the changelog; formatted SQL files start with a -- liquibase formatted sql header)

Common interview mistakes 🚫

  • ❌ No primary key
  • ❌ Using SERIAL without understanding it
  • ❌ No constraints
  • ❌ Mixing data inserts with schema creation
  • ❌ No idempotency
  • ❌ Using TEXT everywhere

Rollback question (classic trap)

Q: How do you rollback this migration?

Answer:

DROP TABLE IF EXISTS users;

Senior addition:

In production we usually prefer forward-only migrations; rollback scripts are optional and often avoided.


One-sentence “gold” answer

A create-table migration is a versioned, idempotent SQL script that defines schema structure, constraints, and indexes without embedding business logic.

Posted in Без рубрики | Leave a comment

Java.DBMigrationTools.How do you test your migrations before deploying?

Short senior-level answer (what to say first)

I test migrations in isolation and in realistic environments.
I run them locally against a clean database, against a database with real historical data, and in CI using ephemeral databases. I also validate rollback behavior, performance on large tables, and backward compatibility with the running application. For risky migrations, I use dry runs, shadow tables, or multi-step migrations.

That’s the headline. Now let’s break it down.


1. Local testing (developer machine)

Goals

  • Catch syntax errors
  • Validate constraints, indexes, FK behavior
  • Check basic correctness

Typical flow

  • Start a fresh DB (Docker)
  • Apply all migrations from scratch
  • Start the application and smoke-test

Example:

docker-compose up postgres
./gradlew flywayMigrate

What I always check:

  • Migration order
  • Idempotency (re-run safety if tool supports it)
  • FKs & indexes created as expected

⚠️ Common mistake:

“I only run migrations on my existing local DB.”
That misses cold-start failures.

2. Testing on real-like data (the most important part)

Why this matters

Most migration bugs appear only with:

  • Large tables
  • Existing NULLs
  • Broken historical data
  • Unexpected constraint violations

What I do

  • Take a sanitized dump from staging / prod
  • Restore locally
  • Run migrations on top of it

pg_restore -d app_db staging_dump.dump
./gradlew flywayMigrate

Things I validate:

  • Migration duration
  • Locks (ALTER TABLE is dangerous)
  • Index creation time
  • Unexpected data failures
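During such a trial run, lock waits can be spotted directly from PostgreSQL's activity view (a minimal sketch):

```sql
-- Sessions currently waiting on a lock while the migration runs (PostgreSQL)
SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';
```

Pair it with pg_locks if you need to find which session is doing the blocking.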

💡 Interview tip:

Always mention real data, not only empty schemas.


3. CI testing (automated)

Standard setup

  • Ephemeral DB (Docker / Testcontainers)
  • Migrate from zero
  • Run tests

Example:

  • PostgreSQL container
  • Flyway / Liquibase
  • Integration tests depend on migration success

What CI catches:

  • Broken SQL
  • Missing dependencies
  • Order issues

What CI does not catch:

  • Performance problems on large tables

4. Backward compatibility testing (very senior signal)

This is where many candidates fail.

Question you must answer:

Can the old application version work with the new schema?

Typical techniques:

  • Expand → Migrate → Contract
    1. Add nullable column
    2. Deploy app writing both
    3. Backfill data
    4. Remove old column later
  • Avoid:
    • Dropping columns immediately
    • Making NOT NULL without defaults
    • Renaming columns without compatibility
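As a sketch, an Expand → Migrate → Contract sequence might be split across releases like this (the full_name / first_name / last_name columns are hypothetical):

```sql
-- Release 1 (Expand): add the column as nullable, so the old app version still works
ALTER TABLE users ADD COLUMN IF NOT EXISTS full_name TEXT;

-- Between releases (Migrate): backfill in chunks to avoid long-held row locks
UPDATE users
SET full_name = first_name || ' ' || last_name
WHERE full_name IS NULL
  AND id BETWEEN 1 AND 10000;   -- advance the id range per batch

-- Release 3 (Contract): drop the old columns once nothing reads them
ALTER TABLE users DROP COLUMN first_name;
ALTER TABLE users DROP COLUMN last_name;
```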

If you mention this → interviewer knows you’ve done production work.


5. Rollback strategy (trick question)

Truthful senior answer:

I don’t rely on automatic rollback.
I prefer forward-only migrations and compensate with new migrations.

Still, I test:

  • Can I restore from backup?
  • Can I deploy a hotfix migration?
  • Are migrations repeatable / checksum-safe?

Mentioning backups > rollback scripts is a strong signal.


6. Performance & locking checks (Postgres example)

Before production:

  • Check execution plan
  • Check lock type
  • Avoid long ACCESS EXCLUSIVE locks

Techniques:

  • CREATE INDEX CONCURRENTLY
  • Chunked updates
  • Background backfills

Example (⚠️ EXPLAIN ANALYZE actually executes the statement — wrap it in a transaction and roll back):

BEGIN;
EXPLAIN ANALYZE
UPDATE users SET status = 'ACTIVE';
ROLLBACK;
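The lock-avoiding techniques listed above can be sketched in SQL (PostgreSQL; the index name and batch bounds are illustrative):

```sql
-- Index build without an ACCESS EXCLUSIVE lock; cannot run inside a transaction block
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_users_status ON users (status);

-- Chunked update instead of one table-wide UPDATE holding locks for minutes
UPDATE users SET status = 'ACTIVE'
WHERE status IS NULL
  AND id BETWEEN 1 AND 10000;   -- advance the range per batch
```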

7. Staging / pre-prod validation

Final gate before prod:

  • Same DB engine & version
  • Same migration tool
  • Same deployment pipeline

What I verify:

  • Migration time window
  • Monitoring alerts
  • Ability to stop rollout

Perfect interview answer (polished, 30–40 seconds)

I test migrations at multiple levels.
Locally, I apply all migrations from scratch to catch ordering and syntax issues. Then I run them against a sanitized copy of real production data to catch data and performance problems. In CI, migrations run automatically on ephemeral databases.
Before production, I validate backward compatibility and avoid destructive changes by using multi-step migrations. I don’t rely on rollbacks; instead, I prepare forward fixes and always ensure backups exist.


Java.DBMigrationTools.How does optimistic locking behave under high contention, and what failure modes appear?

Short answer (interview-ready)

Under high contention, optimistic locking degrades by producing frequent conflicts, causing retries, increased latency, and potential livelock or starvation.
It stops scaling not by blocking, but by wasting work.

That sentence alone already sounds senior.

Now let’s unpack it properly.


What “high contention” means here

High contention =
Many concurrent transactions update the same rows.

Example:

  • Same account
  • Same inventory item
  • Same counter
  • Hot partition / hot key

What happens under high contention (step by step)

Example with version-based locking

  1. 10 transactions read:

balance = 1000, version = 42

  2. All calculate the same new state:

newBalance = 900

  3. The first transaction updates:

UPDATE account
SET balance = 900, version = 43
WHERE id = 1 AND version = 42;

✔ success

  4. The remaining 9 transactions execute:

... WHERE version = 42;

❌ 0 rows updated → conflict

  5. They must:
  • reload
  • recompute
  • retry

🔁 Loop repeats.


Failure modes of optimistic locking under contention

1️⃣ Retry storm (most common)

  • Many transactions fail
  • All retry almost simultaneously
  • CPU spikes
  • DB load increases
  • Throughput collapses

📉 More retries ≠ more progress


2️⃣ Livelock

System is:

  • Busy
  • Consuming CPU
  • Doing work

But:

  • Very little actual progress
  • Transactions constantly invalidate each other

💥 System looks alive but makes no progress.


3️⃣ Starvation

  • Some transactions never succeed
  • Faster or luckier clients win repeatedly
  • Others keep failing version checks

Especially dangerous in:

  • Distributed retries
  • Client-side retry loops

4️⃣ Latency explosion

  • Each logical operation becomes:
read → compute → fail → retry → read → compute → fail ...

Tail latency grows massively

SLAs break

5️⃣ Hot-row amplification

  • Every retry re-reads the same hot row
  • Cache churn
  • Lock manager pressure (yes, even without explicit locks)

Why optimistic locking still “works” (important nuance)

Optimistic locking:

  • Preserves correctness
  • Never corrupts data
  • Never deadlocks

But:

  • It fails by inefficiency, not by inconsistency

This is why it’s dangerous in financial or inventory systems.


Contrast with pessimistic locking under contention

Aspect      | Optimistic       | Pessimistic
Behavior    | Many retries     | Blocking
CPU usage   | High             | Lower
DB load     | High             | Predictable
Latency     | Unstable         | Stable
Throughput  | Degrades sharply | Degrades smoothly

🔑 Pessimistic locking degrades gracefully.


How experienced systems handle this (very senior)

1️⃣ Switch strategy dynamically

  • Optimistic for normal load
  • Pessimistic for hot paths

2️⃣ Backoff & jitter on retries

retry → sleep(random 10–50ms) → retry

Reduces retry storms.


3️⃣ Partition the hot key

  • Sharded counters
  • Bucketed balances
  • Append-only ledger + async aggregation
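A sharded counter, for example, can be sketched like this (PostgreSQL; the table and bucket count are illustrative):

```sql
-- Spread one hot counter row across 8 buckets
CREATE TABLE IF NOT EXISTS page_views (
    page_id BIGINT NOT NULL,
    bucket  INT    NOT NULL,
    views   BIGINT NOT NULL DEFAULT 0,
    PRIMARY KEY (page_id, bucket)
);

-- Each writer increments a random bucket, dividing contention by ~8
INSERT INTO page_views (page_id, bucket, views)
VALUES (1, floor(random() * 8)::int, 1)
ON CONFLICT (page_id, bucket) DO UPDATE
SET views = page_views.views + 1;

-- Readers aggregate across the buckets
SELECT sum(views) AS total FROM page_views WHERE page_id = 1;
```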

4️⃣ Combine with idempotency keys

  • Prevent duplicate logical operations
  • Reduce retries at business level

Typical interview mistakes ❌

  • “Optimistic locking is faster” → ❌ (only under low contention)
  • “Just retry until success” → ❌ (livelock)
  • “Database will handle it” → ❌
  • “Use bigger DB” → ❌

Interview-ready final answer (say this)

Under high contention, optimistic locking causes frequent version conflicts, leading to retries, increased latency, and wasted work.
While correctness is preserved, the system can experience retry storms, livelock, and starvation, causing throughput to collapse.
In such cases, pessimistic locking or redesigning the data model is usually more appropriate.


What is optimistic locking?

Optimistic locking is a concurrency control strategy where:

You assume conflicts are rare, so you don’t lock data upfront.
Instead, you detect conflicts at write time and fail if data was changed by someone else.

Key idea:

  • Read without lock
  • Verify nothing changed
  • Update only if version matches

How optimistic locking works (step by step)

Typical implementation: version column

id | balance | version
---+---------+--------
1  | 1000    | 7

Flow

  1. Read data:

SELECT balance, version FROM account WHERE id = 1;

  2. Client calculates the new state (balance - 100)

  3. Update with a version check:

UPDATE account
SET balance = 900,
    version = version + 1
WHERE id = 1
  AND version = 7;

  4. Check affected rows:
  • 1 row → success
  • 0 rows → conflict detected

✔ No locks
✔ No blocking
✔ Conflict detected explicitly

Why it’s called optimistic

Because the system is optimistic:

  • Assumes concurrent updates usually won’t collide
  • Pays cost only when conflict happens

Opposite of pessimistic locking.

What happens on conflict?

The application must:

  • Retry
  • Reload data
  • Or return error (409 Conflict in REST)

❗ Database does not retry for you.


Optimistic locking in ORMs (important)

JPA / Hibernate

@Version
private Long version;

Hibernate generates:

UPDATE ... WHERE id = ? AND version = ?

Throws:

  • OptimisticLockException

Optimistic vs pessimistic locking (clear comparison)

Aspect            | Optimistic         | Pessimistic
Assumption        | Conflicts are rare | Conflicts are common
Locks             | None               | Explicit locks
Blocking          | No                 | Yes
Throughput        | High               | Lower
Deadlocks         | No                 | Possible
Conflict handling | Detect & fail      | Prevent
Retry cost        | Application-level  | DB-level wait

When to use optimistic locking

✔ Read-heavy systems
✔ Low contention updates
✔ REST APIs
✔ Microservices
✔ UI-driven edits
✔ Distributed systems

Examples:

  • Editing user profile
  • Updating order metadata
  • Admin panels
  • CMS-like systems

When to use pessimistic locking

✔ High contention
✔ Financial operations
✔ Inventory counters
✔ Sequential workflows
✔ Strong invariants

Examples:

  • Money transfers
  • Stock decrement
  • Seat booking
  • Exactly-once payment logic

Decision rule (interview gold)

If retrying is cheap → optimistic locking
If retrying is dangerous → pessimistic locking


Very senior nuance (bonus)

You often combine them

  1. Optimistic locking for most updates
  2. Pessimistic locking for critical transitions

Example:

  • Order creation → optimistic
  • Payment capture → pessimistic

Typical interview mistakes ❌

  • “Optimistic locking is faster always” → ❌
  • “Pessimistic locking is bad practice” → ❌
  • “Transactions handle everything” → ❌
  • “Just retry automatically” → ❌

Interview-ready answer (say this)

Optimistic locking avoids locking data upfront and detects concurrent modifications using version checks at update time, making it suitable for low-contention, read-heavy systems.
Pessimistic locking locks rows before modification to prevent conflicts and is used when protecting critical invariants like balances or inventory.
The choice depends on contention level and the cost of retries.


Java.DBMigrationTools.What is pessimistic locking, how it works?

What is pessimistic locking?

Pessimistic locking is a concurrency control strategy where:

You assume conflicts will happen, so you lock data before modifying it, preventing others from changing it until you’re done.

In short:

  • “I lock first, then read and write.”
  • Others must wait, fail, or timeout.

Why it’s called pessimistic

Because the system is pessimistic about concurrency:

  • It assumes concurrent access will cause problems
  • So it prevents it upfront, instead of detecting conflicts later

This is the opposite of optimistic locking.


How pessimistic locking works (step by step)

Let’s take a bank account example.

Scenario: withdraw money safely

BEGIN;

SELECT balance
FROM account
WHERE id = 1
FOR UPDATE;

UPDATE account
SET balance = balance - 100
WHERE id = 1;

COMMIT;

What happens internally

  1. Transaction starts
  2. SELECT ... FOR UPDATE
    • Database places a row-level exclusive lock
    • Other transactions:
      • cannot UPDATE
      • cannot DELETE
      • plain SELECTs usually still read old row versions (MVCC); locking reads (FOR SHARE / FOR UPDATE) must wait
  3. You safely compute and update state
  4. COMMIT
    • Lock is released
  5. Next waiting transaction proceeds

✔ No lost updates
✔ Invariant preserved (balance >= 0)


What exactly gets locked?

Depends on the database, but typically:

  • Row-level lock (most common)
  • Sometimes gap locks / range locks
  • Rarely table-level locks (bad for throughput)

In PostgreSQL:

  • FOR UPDATE → exclusive row lock
  • FOR SHARE → shared read lock

What happens to other transactions?

If another transaction tries:

UPDATE account SET balance = balance - 50 WHERE id = 1;

It will:

  • Wait until the first transaction commits or rolls back
  • Or timeout
  • Or fail immediately (NOWAIT / SKIP LOCKED)
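The timeout variant can be made explicit with a session setting (PostgreSQL sketch):

```sql
-- Give up on acquiring row locks after 2 seconds (per session)
SET lock_timeout = '2s';

-- Fails with "canceling statement due to lock timeout" if the row stays locked
UPDATE account SET balance = balance - 50 WHERE id = 1;
```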

Variants you should know (interview bonus)

1️⃣ Blocking (default)

SELECT ... FOR UPDATE;

Others wait

2️⃣ Fail fast

SELECT ... FOR UPDATE NOWAIT;
  • If locked → error immediately

3️⃣ Skip locked rows (queues!)

SELECT *
FROM jobs
FOR UPDATE SKIP LOCKED
LIMIT 1;

Used in:

  • Job queues
  • Task schedulers
  • Worker pools

What pessimistic locking is good for

✔ Money transfers
✔ Inventory updates
✔ Counters with invariants
✔ Stateful workflows
✔ Exactly-once financial logic

Basically:

Any place where correctness > throughput


What pessimistic locking is bad for

❌ High-throughput event ingestion
❌ Analytics writes
❌ Append-only logs
❌ Distributed retries
❌ Kafka consumers at scale

Locks don’t scale well under contention.


Pessimistic locking vs transactions (important!)

  • Locking requires a transaction
  • But a transaction does not imply locking
BEGIN;
UPDATE account SET balance = balance - 100;
COMMIT;

⚠ Without explicit locking:

  • Two transactions can overwrite each other
  • Result = lost updates

Typical interview mistakes ❌

  • “Transaction is enough” → ❌
  • “Locks are slow so don’t use them” → ❌
  • “Upsert replaces locking” → ❌
  • “Deadlocks won’t happen” → ❌ (they will)

Interview-ready short answer (say this)

Pessimistic locking is a concurrency control strategy where rows are explicitly locked before modification to prevent concurrent changes.
It works by acquiring row-level locks—typically via SELECT … FOR UPDATE—inside a transaction, blocking other writers until the transaction completes.
It’s used when protecting business invariants like balances or inventory, where correctness is more important than throughput.


Java.DBMigrationTools.UpsertVsPessimisticLock

Core idea (one-liner)

  • Upsert → “Let the database resolve conflicts automatically.”
  • Pessimistic locking → “Prevent conflicts by locking data in advance.”

They solve different problems.


Upsert

What it is

A single atomic write:

  • Insert if row doesn’t exist
  • Update if it does
  • Based on a unique constraint

INSERT INTO orders (id, status)
VALUES (42, 'PAID')
ON CONFLICT (id)
DO UPDATE SET status = EXCLUDED.status;

Characteristics

✅ Lock-free at application level
✅ Naturally idempotent
✅ High throughput
✅ Works great with retries
❌ Limited business logic
❌ “Last write wins” unless carefully designed

Pessimistic locking

What it is

You explicitly lock rows before modifying them.

BEGIN;

SELECT * FROM accounts
WHERE id = 1
FOR UPDATE;

UPDATE accounts
SET balance = balance - 100
WHERE id = 1;

COMMIT;

Characteristics

✅ Strong consistency
✅ Good for state transitions
✅ Prevents lost updates
❌ Lower throughput
❌ Risk of deadlocks
❌ Requires careful transaction management

Side-by-side comparison (interview gold)

Aspect            | Upsert     | Pessimistic Locking
Concurrency model | Optimistic | Defensive
Idempotence       | Natural    | Must be designed
Performance       | High       | Lower
Blocking          | Minimal    | Yes
Deadlocks         | No         | Possible
Retry safety      | Excellent  | Dangerous
Business logic    | Limited    | Complex allowed

When to use Upsert

✔ Event ingestion
✔ Kafka consumers
✔ External system sync
✔ REST retries
✔ Deduplication
✔ Projections / read models

Example:

INSERT INTO processed_events (event_id)
VALUES ('evt-123')
ON CONFLICT DO NOTHING;

When to use Pessimistic Locking

✔ Money transfers
✔ Inventory decrement
✔ Stateful workflows
✔ Counters with invariants
✔ “Exactly once” financial logic

Example:

SELECT balance FROM account
WHERE id = 1
FOR UPDATE;

Why upsert cannot replace locking

❌ Wrong approach (note: ON CONFLICT exists only for INSERT — the often-imagined “UPDATE … ON CONFLICT” is not even valid SQL):

INSERT INTO account (id, balance)
VALUES (1, 900)
ON CONFLICT (id) DO UPDATE SET balance = EXCLUDED.balance;

This still allows double spending under retries: the upsert only decides whether the row exists, it does not re-check that the balance you computed from is still current.

Locks protect invariants, not existence.


Common real-world pattern (very senior)

Combine both

  1. Upsert to ensure idempotence
  2. Lock only when changing critical state

Example:

Request → idempotency key → upsert request record
→ SELECT ... FOR UPDATE
→ apply state transition

This is how payment systems are built.
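Sketched in SQL, the combined pattern might look like this (the payment_requests table and all values are hypothetical):

```sql
BEGIN;

-- 1) Upsert: record the request idempotently; a conflict means this is a retry
INSERT INTO payment_requests (idempotency_key, status)
VALUES ('req-abc', 'NEW')
ON CONFLICT (idempotency_key) DO NOTHING;

-- 2) Lock: serialize access to the critical row
SELECT balance FROM account WHERE id = 1 FOR UPDATE;

-- 3) Transition: apply the state change under the lock
UPDATE account SET balance = balance - 100 WHERE id = 1;

COMMIT;
```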


Typical interview mistakes ❌

  • “Upsert replaces locks” → ❌
  • “Locks are slower so don’t use them” → ❌
  • “Transactions are enough” → ❌
  • “Just retry on failure” → ❌

Interview-ready answer (say this)

Upsert is an optimistic, lock-free way to insert or update data atomically and is ideal for idempotent writes under retries.
Pessimistic locking explicitly locks rows to protect business invariants and is necessary for state-dependent logic like balances or inventory.
In practice, systems often combine both: upserts for idempotence and locks for critical state transitions.


Java.DBMigrationTools.What is upsert ?

Upsert = UPDATE + INSERT

An upsert is a database operation that:

  • INSERTS a row if it does not exist
  • UPDATES the existing row if it already exists

👉 It is typically based on a unique key or primary key.


Why upserts exist (say this at interview)

Upserts solve:

  • Duplicate inserts on retries
  • Race conditions between check-then-insert
  • Idempotent writes
  • Concurrent writes in distributed systems

Instead of the unsafe check-then-insert pattern:

SELECT ...;
IF NOT EXISTS THEN INSERT;

(which has a race window between the check and the insert),

you do one atomic operation.


Upserts in different databases

PostgreSQL (most common)

INSERT INTO users (id, email, name)
VALUES (1, 'a@mail.com', 'Stanley')
ON CONFLICT (id)
DO UPDATE SET
    email = EXCLUDED.email,
    name  = EXCLUDED.name;

  • EXCLUDED = the row you tried to insert
  • The conflict target must have a unique constraint or unique index

Upsert vs plain INSERT (important)

❌ Plain insert:

INSERT INTO orders (id, amount) VALUES (1, 100);

Retry → error or duplicate

✅ Upsert:

INSERT INTO orders (id, amount)
VALUES (1, 100)
ON CONFLICT (id) DO NOTHING;

Retry → safe, idempotent ✔


Common upsert strategies

1. Insert or do nothing

ON CONFLICT (id) DO NOTHING

Used when:

  • You only care about first write
  • Deduplication

2. Insert or overwrite

DO UPDATE SET status = EXCLUDED.status;

Used when:

  • You want last write wins
  • Sync from external system

3. Insert or update conditionally

Very senior-level pattern.
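One common form is a version-guarded upsert that only accepts newer data (PostgreSQL sketch; the table and source_version column are illustrative):

```sql
-- Update only when the incoming row is newer, so out-of-order
-- retries or replays cannot overwrite fresher state
INSERT INTO inventory (sku, qty, source_version)
VALUES ('ABC-1', 5, 42)
ON CONFLICT (sku) DO UPDATE
SET qty            = EXCLUDED.qty,
    source_version = EXCLUDED.source_version
WHERE inventory.source_version < EXCLUDED.source_version;
```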


Upserts and idempotence (connect the dots)

Upserts are one of the main tools to achieve idempotence:

Retry       | Result
First call  | Insert
Second call | Update or no-op
Final state | Same

Perfect for:

  • REST POST retries
  • Kafka consumers
  • Payment processing
  • Event sourcing projections

Typical interview pitfalls ❌

  • “Upsert is select-then-insert” → ❌ race condition
  • “Upsert doesn’t need unique index” → ❌ wrong
  • “Transaction makes it safe” → ❌ still unsafe under concurrency

Short interview-ready answer

An upsert is a single atomic database operation that inserts a row if it doesn’t exist or updates it if it does, usually based on a unique constraint.
It’s commonly used to make writes idempotent and safe under retries and concurrency.


Java.DBMigrationTools.What is idempotence in database changes?

Idempotence in database changes — interview definition

Idempotence means that executing the same database operation multiple times produces the same final state as executing it once.

In other words:
retrying a DB change is safe and does not corrupt data or create duplicates.


Why idempotence matters (very important to say at interview)

Idempotent DB changes are critical because of:

  • Retries (network failures, timeouts, client retries)
  • At-least-once delivery (Kafka, RabbitMQ)
  • Distributed systems
  • Crash recovery
  • Deployments & migrations

💡 If your operation is not idempotent, retries may silently corrupt data.


Simple examples

❌ Non-idempotent operation

INSERT INTO orders (id, amount) VALUES (123, 100);

If executed twice → duplicate row or constraint violation.

✅ Idempotent operation (UPSERT)

INSERT INTO orders (id, amount)
VALUES (123, 100)
ON CONFLICT (id) DO UPDATE
SET amount = EXCLUDED.amount;

First run → inserts

Second run → no state change

Final DB state is the same ✔

Common idempotent patterns in DB

1. Natural or technical idempotency key

CREATE UNIQUE INDEX ux_order_id ON orders(order_id);

Duplicate requests are rejected or merged

Very common in payments, orders, events

2. Upsert instead of insert

MERGE (SQL standard; Oracle, SQL Server, PostgreSQL 15+):

MERGE INTO users u
USING (SELECT 1 AS id) s
ON (u.id = s.id)
WHEN NOT MATCHED THEN
  INSERT (id, name) VALUES (1, 'Stanley');

3. State-based updates (not incremental)

❌ Non-idempotent:

UPDATE account SET balance = balance - 100;

✅ Idempotent:

UPDATE account SET balance = 900
WHERE id = 1;

4. Idempotent deletes

DELETE FROM sessions WHERE session_id = 'abc';

Deleting twice → same final state

Delete operations are naturally idempotent

5. Processed-events table (very senior pattern)

INSERT INTO processed_events (event_id)
VALUES ('evt-123')
ON CONFLICT DO NOTHING;

Then:

-- apply business logic only if insert succeeded

Used in:

  • Kafka consumers
  • Outbox / Inbox patterns
  • Exactly-once semantics (emulated)
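Put together, an inbox-style consumer might run (sketch; the orders update is an illustrative business effect):

```sql
BEGIN;

-- Claim the event; 0 rows affected means it was already processed
INSERT INTO processed_events (event_id)
VALUES ('evt-123')
ON CONFLICT DO NOTHING;

-- The application applies the business effect only if the insert
-- above reported 1 affected row
UPDATE orders SET status = 'PAID' WHERE id = 42;

COMMIT;
```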

Idempotence vs transactions (important distinction)

  • Transaction ≠ idempotence
  • A transaction guarantees atomicity
  • Idempotence guarantees safe retries

You usually need both.

Typical interview pitfalls ❌

Candidates often say:

  • “Idempotence means transaction”
  • “Just wrap it in a transaction”
  • “Database will handle duplicates”

❌ Wrong.

Transactions do not protect you from retries.


Short interview-ready answer (say this)

Idempotence in database changes means that applying the same change multiple times results in the same final state as applying it once.
It’s crucial for retries, message-based systems, and distributed environments.
Common techniques include upserts, unique constraints with idempotency keys, state-based updates, and tracking processed events.


Java.DBMigrationTools.Can you use the same migration tool for multiple databases?

Flyway

  • ✅ Supported. You can point Flyway at any JDBC URL, so you can run the same migration scripts against multiple databases.
  • Each database keeps its own flyway_schema_history table.
  • You typically run:
flyway -url=jdbc:postgresql://db1:5432/app -user=user -password=pass migrate
flyway -url=jdbc:postgresql://db2:5432/app -user=user -password=pass migrate

Or configure multiple datasources in CI/CD.

Liquibase

  • ✅ Also supported. You just configure different url values in liquibase.properties or pass them via CLI.
  • Each database will maintain its own DATABASECHANGELOG and DATABASECHANGELOGLOCK tables.
  • Example:
liquibase --url="jdbc:mysql://db1:3306/app" update
liquibase --url="jdbc:mysql://db2:3306/app" update

⚠️ Things to watch out for

  1. Separate history tables per DB → each DB tracks migrations independently.
  2. Idempotent scripts → make sure migrations can run cleanly in all environments.
  3. Automation → often wrapped in CI/CD jobs or scripts to apply to multiple databases in sequence.
  4. Schema differences → if your DBs drift, the same migration may fail on one but succeed on another.

Java.DBMigrationTools.How do you view migration history?

That depends on the migration tool you’re using. Here are the most common ones:

Flyway

  • Flyway keeps migration history in a table called flyway_schema_history in your database.
  • You can check it by running:
SELECT * FROM flyway_schema_history;

Or from CLI:

flyway info

This shows applied, pending, failed migrations, and checksums.

Liquibase

  • Liquibase stores migration history in a table called DATABASECHANGELOG.
  • To view:
SELECT * FROM databasechangelog;

Or from CLI:

liquibase history