
Database Setup

Siyahfy uses PostgreSQL as its primary database and Redis as an optional caching layer. This guide covers installation, configuration, migrations, backups, and common operations.

PostgreSQL Installation

Ubuntu/Debian

# Install PostgreSQL 16
sudo apt update
sudo apt install -y postgresql postgresql-contrib

# Start and enable the service
sudo systemctl start postgresql
sudo systemctl enable postgresql

# Verify installation
psql --version
sudo systemctl status postgresql

macOS (for development)

# Using Homebrew
brew install postgresql@16
brew services start postgresql@16

Windows (for development)

Download the installer from postgresql.org, or use the command line:

# Using winget
winget install PostgreSQL.PostgreSQL.16

# Using Chocolatey
choco install postgresql16

Database Creation and User Setup

Create the Database

# Switch to the postgres user
sudo -u postgres psql

# Create the database
CREATE DATABASE siyahfy;

# Verify
\l

In development, using the default postgres user is fine. For production, create a dedicated user with limited privileges:

-- Connect as the postgres superuser
sudo -u postgres psql

-- Create the application user
CREATE USER siyahfy_app WITH PASSWORD 'your_strong_password_here';

-- Grant privileges on the database
GRANT ALL PRIVILEGES ON DATABASE siyahfy TO siyahfy_app;

-- Connect to the siyahfy database
\c siyahfy

-- Grant schema privileges
GRANT ALL ON SCHEMA public TO siyahfy_app;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO siyahfy_app;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO siyahfy_app;

-- Verify
\du

Then update your environment variables:

# backend.siyahfy.com/.env
DB_USER=siyahfy_app
DB_HOST=localhost
DB_DATABASE=siyahfy
DB_PASSWORD=your_strong_password_here
DB_PORT=5432
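For reference, a Node backend typically assembles these variables into a connection config object. A minimal sketch of that mapping (the actual wiring in backend.siyahfy.com/config/index.js may differ):

```javascript
// Sketch: build a pg-style connection config from the env vars above.
// Illustrative only; not the backend's actual config module.
function dbConfigFromEnv(env) {
  return {
    user: env.DB_USER,
    host: env.DB_HOST,
    database: env.DB_DATABASE,
    password: env.DB_PASSWORD,
    port: Number(env.DB_PORT) || 5432, // fall back to the PostgreSQL default
  };
}

const config = dbConfigFromEnv(process.env);
```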

PostgreSQL Configuration for Production

Edit postgresql.conf (usually at /etc/postgresql/16/main/postgresql.conf):

# Connection settings
listen_addresses = 'localhost'      # Only accept local connections
max_connections = 100

# Memory settings (adjust based on available RAM)
shared_buffers = 256MB              # ~25% of available RAM
effective_cache_size = 768MB        # ~75% of available RAM
work_mem = 4MB
maintenance_work_mem = 128MB

# Logging
log_statement = 'ddl'               # Log DDL statements
log_min_duration_statement = 1000   # Log queries taking more than 1 second
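The 25%/75% rules of thumb above can be computed mechanically from total RAM. A small illustrative helper (the percentages are this guide's guideline, not PostgreSQL defaults; the values shown assume a 1 GB machine):

```javascript
// Suggest shared_buffers (~25% of RAM) and effective_cache_size (~75% of RAM)
// per the rules of thumb above. Purely illustrative.
function suggestMemorySettings(totalRamMb) {
  return {
    shared_buffers: `${Math.round(totalRamMb * 0.25)}MB`,
    effective_cache_size: `${Math.round(totalRamMb * 0.75)}MB`,
  };
}

console.log(suggestMemorySettings(1024));
// { shared_buffers: '256MB', effective_cache_size: '768MB' }
```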

Restart PostgreSQL after changes:

sudo systemctl restart postgresql

Migration System

Siyahfy uses an automatic migration system built into the backend. Migrations run every time the backend server starts.

How It Works

  1. On startup, backend.siyahfy.com/config/index.js calls runMigrations()
  2. The function creates a _migrations tracking table if it does not exist:
CREATE TABLE IF NOT EXISTS _migrations (
  id SERIAL PRIMARY KEY,
  filename VARCHAR(255) UNIQUE NOT NULL,
  executed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
  3. It reads all .sql and .js files from backend.siyahfy.com/migrations/
  4. Files are sorted alphabetically and executed in order
  5. Each file is checked against the _migrations table and skipped if a record already exists
  6. After successful execution, the filename is inserted into _migrations
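The selection logic in the steps above amounts to: keep only .sql and .js files, sort alphabetically, and drop anything already recorded in _migrations. A standalone sketch of that logic (not the backend's actual code):

```javascript
// Given the files in migrations/ and the filenames already recorded in
// _migrations, return the migrations still to run, in execution order.
// Sketch of the logic described above, not the backend's actual code.
function pendingMigrations(files, executed) {
  const done = new Set(executed);
  return files
    .filter((f) => f.endsWith('.sql') || f.endsWith('.js'))
    .sort() // alphabetical order => deterministic execution order
    .filter((f) => !done.has(f));
}

console.log(pendingMigrations(
  ['create-pages-table.sql', 'add-price-usd.sql', 'README.md'],
  ['add-price-usd.sql']
));
// [ 'create-pages-table.sql' ]
```

Because ordering is alphabetical, a migration's filename doubles as its position in the sequence, which is why descriptive, stable names matter.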

Migration Files

Migration files live in backend.siyahfy.com/migrations/. Current migrations include:

| File | Purpose |
| --- | --- |
| add-custom-fonts.sql | Custom font storage for themes |
| add-draft-config.sql | Draft configuration support |
| add-draft-files.sql | Draft file management |
| add-pickup-addresses.sql | Pickup address support for orders |
| add-price-usd.sql | USD pricing column |
| add-signup-tracking.sql | Signup funnel analytics |
| add-theme-name-column.sql | Theme name metadata |
| app-credits.sql | Credit/billing system tables |
| create_whatsapp_marketing_settings.sql | WhatsApp campaign settings |
| create-pages-table.sql | Custom page builder tables |
| create-refresh-tokens.sql | JWT refresh token storage |
| create-store-banners.sql | Storefront banner management |
| create-store-menus.sql | Store navigation menus |
| create-store-templates.sql | Store template presets |
| create-theme-history-table.sql | Theme version history |
| create-theme-tables.sql | Theme marketplace tables |
| developer-apps.sql | Developer app registry |
| developer-bank-details.sql | Developer payout information |
| developer-earnings.sql | Developer earnings tracking |
| inventory-system-v2.sql | Inventory management (v2) |
| migrate-inventory-data.js | Data migration script for inventory |
| payment-gateway-features.sql | Payment gateway configuration |
| prefilled-chat.sql | Pre-filled chat templates |
| prefilled-chat-conversations.sql | Pre-filled chat conversation data |
| signup-funnel.sql | Signup funnel tracking tables |
| studio-developers.sql | Studio developer profiles |
| theme-marketplace.sql | Theme marketplace listings |
| theme-terminate.sql | Theme deactivation support |
| vendor-credits.sql | Vendor credit system |

Writing New Migrations

To add a new migration:

  1. Create a .sql file in backend.siyahfy.com/migrations/
  2. Use a descriptive, kebab-case filename
  3. The migration runs automatically on the next server start
-- backend.siyahfy.com/migrations/add-coupon-codes.sql
CREATE TABLE IF NOT EXISTS coupon_codes (
  id SERIAL PRIMARY KEY,
  store_id INTEGER REFERENCES stores(id) ON DELETE CASCADE,
  code VARCHAR(50) NOT NULL,
  discount_type VARCHAR(20) NOT NULL CHECK (discount_type IN ('percentage', 'fixed')),
  discount_value DECIMAL(10,2) NOT NULL,
  min_order_amount DECIMAL(10,2) DEFAULT 0,
  max_uses INTEGER,
  used_count INTEGER DEFAULT 0,
  expires_at TIMESTAMP,
  is_active BOOLEAN DEFAULT true,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_coupon_codes_store ON coupon_codes(store_id);
CREATE UNIQUE INDEX idx_coupon_codes_unique ON coupon_codes(store_id, code);

For JavaScript migrations (data transformations), export an async function:

// backend.siyahfy.com/migrations/transform-data-example.js
const pool = require('../config');

module.exports = async function () {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    // ... perform data transformations
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
};

Checking Migration Status

-- Connect to the database
psql -U postgres -d siyahfy

-- View all executed migrations
SELECT filename, executed_at FROM _migrations ORDER BY executed_at;

-- Check if a specific migration has run
SELECT * FROM _migrations WHERE filename = 'create-theme-tables.sql';

Backup Strategies

Manual Backup with pg_dump

# Full database backup (compressed)
pg_dump -U postgres -d siyahfy -Fc -f /backups/siyahfy_$(date +%Y%m%d_%H%M%S).dump

# SQL format backup (human-readable)
pg_dump -U postgres -d siyahfy > /backups/siyahfy_$(date +%Y%m%d_%H%M%S).sql

# Backup specific tables only
pg_dump -U postgres -d siyahfy -t stores -t products -t orders \
  -Fc -f /backups/siyahfy_tables_$(date +%Y%m%d_%H%M%S).dump

# Schema-only backup (no data)
pg_dump -U postgres -d siyahfy --schema-only > /backups/siyahfy_schema.sql

Automated Daily Backups

Create a backup script and schedule it with cron:

#!/bin/bash
# /var/www/siyahfy/backup-db.sh

BACKUP_DIR="/backups/postgresql"
DB_NAME="siyahfy"
DB_USER="postgres"
RETENTION_DAYS=30

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Create backup
FILENAME="$BACKUP_DIR/${DB_NAME}_$(date +%Y%m%d_%H%M%S).dump"
pg_dump -U "$DB_USER" -d "$DB_NAME" -Fc -f "$FILENAME"

# Compress if SQL format
# gzip "$FILENAME"

# Remove backups older than retention period
find "$BACKUP_DIR" -name "${DB_NAME}_*.dump" -mtime +$RETENTION_DAYS -delete

echo "Backup completed: $FILENAME"
echo "Size: $(du -sh "$FILENAME" | cut -f1)"
# Make executable
chmod +x /var/www/siyahfy/backup-db.sh

# Schedule daily backup at 2 AM
crontab -e

# Add this line:
0 2 * * * /var/www/siyahfy/backup-db.sh >> /var/log/siyahfy-backup.log 2>&1
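The find -mtime pruning above keys off file modification time; the same retention decision can be made from the timestamp embedded in the filename, which is more robust if files are copied between hosts. An illustrative helper (the siyahfy_YYYYMMDD_HHMMSS.dump naming is taken from the script above):

```javascript
// Decide which backup files are past the retention window, based on the
// siyahfy_YYYYMMDD_HHMMSS.dump naming used by the backup script above.
// Illustrative sketch; timestamps are treated as UTC for simplicity.
function expiredBackups(filenames, now, retentionDays) {
  const cutoff = now.getTime() - retentionDays * 24 * 60 * 60 * 1000;
  return filenames.filter((name) => {
    const m = name.match(/_(\d{4})(\d{2})(\d{2})_(\d{2})(\d{2})(\d{2})\.dump$/);
    if (!m) return false; // unrecognized name: never delete
    const [, y, mo, d, h, mi, s] = m;
    const ts = Date.UTC(+y, +mo - 1, +d, +h, +mi, +s);
    return ts < cutoff;
  });
}

console.log(expiredBackups(
  ['siyahfy_20260101_020000.dump', 'siyahfy_20260201_020000.dump'],
  new Date(Date.UTC(2026, 1, 15)), // 2026-02-15
  30
));
// [ 'siyahfy_20260101_020000.dump' ]
```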

Off-Site Backups

For production, copy backups to a remote location:

# Sync backups to a remote server
rsync -avz /backups/postgresql/ user@backup-server:/backups/siyahfy/

# Or upload to Backblaze B2 / Cloudflare R2
aws s3 cp "$FILENAME" s3://your-backup-bucket/postgresql/ \
  --endpoint-url https://s3.us-east-005.backblazeb2.com

Redis Setup

Redis is optional but recommended for session management and caching. The backend starts without it, but caching features will be disabled.
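This "optional Redis" behavior can be approximated with a cache facade that degrades to an in-memory store when no client is configured. A sketch of the pattern, not the backend's actual implementation:

```javascript
// Cache facade that works with or without a Redis client, mirroring the
// "Redis is optional" behavior described above. Illustrative sketch only.
class Cache {
  constructor(redisClient = null) {
    this.redis = redisClient; // real client when REDIS_URL is configured
    this.memory = new Map();  // fallback when Redis is absent
  }

  async get(key) {
    if (this.redis) return this.redis.get(key);
    return this.memory.has(key) ? this.memory.get(key) : null;
  }

  async set(key, value) {
    if (this.redis) return this.redis.set(key, value);
    this.memory.set(key, value);
  }
}

(async () => {
  const cache = new Cache(); // no Redis configured
  await cache.set('session:1', 'alice');
  console.log(await cache.get('session:1')); // alice
})();
```

The in-memory fallback is fine for a single process in development; in production, Redis keeps sessions and cache entries shared across restarts and multiple backend instances.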

Installation

# Ubuntu/Debian
sudo apt install -y redis-server

# Start and enable
sudo systemctl start redis-server
sudo systemctl enable redis-server

# Verify
redis-cli ping   # Should return PONG

Configuration

Edit /etc/redis/redis.conf:

# Bind to localhost only
bind 127.0.0.1

# Set a password (recommended for production)
requirepass your_redis_password

# Memory limit
maxmemory 256mb
maxmemory-policy allkeys-lru

# Persistence
save 900 1
save 300 10
save 60 10000

Restart Redis after changes:

sudo systemctl restart redis-server

Environment Variable

Update backend.siyahfy.com/.env:

# Without password
REDIS_URL=redis://localhost:6379

# With password
REDIS_URL=redis://:your_redis_password@localhost:6379
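Both URL forms parse with the WHATWG URL class built into Node, which is one way a backend could extract the host, port, and password (illustrative; Redis client libraries usually accept the URL string directly):

```javascript
// Parse a REDIS_URL of either form shown above using Node's built-in URL class.
function parseRedisUrl(urlString) {
  const u = new URL(urlString);
  return {
    host: u.hostname,
    port: Number(u.port) || 6379,  // default Redis port
    password: u.password || null,  // empty string when no password is set
  };
}

console.log(parseRedisUrl('redis://:your_redis_password@localhost:6379'));
// { host: 'localhost', port: 6379, password: 'your_redis_password' }
```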

Common Database Operations

Reset the Database

Drops all data and re-runs migrations on the next backend start:

# Connect to PostgreSQL
sudo -u postgres psql

# Drop and recreate the database
DROP DATABASE siyahfy;
CREATE DATABASE siyahfy;

# Restart the backend to trigger migrations
pm2 restart backend

Restore from Backup

# From a compressed dump file
pg_restore -U postgres -d siyahfy --clean --if-exists /backups/siyahfy_20260101_020000.dump

# From a SQL file
psql -U postgres -d siyahfy < /backups/siyahfy_20260101_020000.sql

# Restore to a fresh database
sudo -u postgres psql -c "DROP DATABASE IF EXISTS siyahfy;"
sudo -u postgres psql -c "CREATE DATABASE siyahfy;"
pg_restore -U postgres -d siyahfy /backups/siyahfy_20260101_020000.dump

Connect to the Database

# Using psql
psql -U postgres -d siyahfy

# Useful psql commands
\dt          # List all tables
\d+ stores   # Describe a table with details
\di          # List all indexes
\df          # List all functions
\x           # Toggle expanded display

Check Database Size

-- Total database size
SELECT pg_size_pretty(pg_database_size('siyahfy'));

-- Table sizes
SELECT
  tablename,
  pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename)) AS total_size,
  pg_size_pretty(pg_relation_size(schemaname || '.' || tablename)) AS data_size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname || '.' || tablename) DESC;

-- Row counts per table
SELECT schemaname, relname AS table_name, n_live_tup AS row_count
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;

Troubleshooting

Connection Refused

# Check if PostgreSQL is running
sudo systemctl status postgresql

# Check which port it's listening on
sudo ss -tlnp | grep postgres

# Check pg_hba.conf for authentication rules
sudo cat /etc/postgresql/16/main/pg_hba.conf

Too Many Connections

-- Check current connections
SELECT count(*) FROM pg_stat_activity;

-- View active connections by application
SELECT application_name, count(*) FROM pg_stat_activity GROUP BY application_name;

-- Terminate idle connections
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle' AND query_start < now() - interval '10 minutes';

Slow Queries

-- Enable query logging (in postgresql.conf):
--   log_min_duration_statement = 500   (log queries > 500ms)

-- Find slow queries in the log:
--   sudo tail -f /var/log/postgresql/postgresql-16-main.log

-- Check for missing indexes
SELECT
  relname AS table_name,
  seq_scan,
  seq_tup_read,
  idx_scan,
  idx_tup_fetch
FROM pg_stat_user_tables
WHERE seq_scan > idx_scan
ORDER BY seq_tup_read DESC
LIMIT 10;