Database Setup
Siyahfy uses PostgreSQL as its primary database and Redis as an optional caching layer. This guide covers installation, configuration, migrations, backups, and common operations.
PostgreSQL Installation
Ubuntu/Debian
# Install PostgreSQL (your distribution's default version; add the PGDG repository if you specifically need 16)
sudo apt update
sudo apt install -y postgresql postgresql-contrib
# Start and enable the service
sudo systemctl start postgresql
sudo systemctl enable postgresql
# Verify installation
psql --version
sudo systemctl status postgresql
macOS (for development)
# Using Homebrew
brew install postgresql@16
brew services start postgresql@16
Windows (for development)
Download the installer from postgresql.org or use the command line:
# Using winget
winget install PostgreSQL.PostgreSQL.16
# Using Chocolatey
choco install postgresql16
Database Creation and User Setup
Create the Database
# Switch to the postgres user
sudo -u postgres psql
# Create the database
CREATE DATABASE siyahfy;
# Verify
\l
Create a Dedicated User (Recommended for Production)
In development, using the default postgres user is fine. For production, create a dedicated user with limited privileges:
-- Connect as the postgres superuser
sudo -u postgres psql
-- Create the application user
CREATE USER siyahfy_app WITH PASSWORD 'your_strong_password_here';
-- Grant privileges on the database
GRANT ALL PRIVILEGES ON DATABASE siyahfy TO siyahfy_app;
-- Connect to the siyahfy database
\c siyahfy
-- Grant schema privileges
GRANT ALL ON SCHEMA public TO siyahfy_app;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO siyahfy_app;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO siyahfy_app;
-- Verify
\du
Then update your environment variables:
# backend.siyahfy.com/.env
DB_USER=siyahfy_app
DB_HOST=localhost
DB_DATABASE=siyahfy
DB_PASSWORD=your_strong_password_here
DB_PORT=5432
PostgreSQL Configuration for Production
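The memory-related values in postgresql.conf are usually derived from total system RAM: roughly 25% for shared_buffers and 75% for effective_cache_size. A minimal sketch of that arithmetic; the pgMemorySettings helper is illustrative and not part of the Siyahfy codebase:

```javascript
// Illustrative helper (not part of Siyahfy): compute the rule-of-thumb
// postgresql.conf memory values for a machine with totalRamMb of RAM.
function pgMemorySettings(totalRamMb) {
  return {
    shared_buffers: `${Math.floor(totalRamMb * 0.25)}MB`,       // ~25% of RAM
    effective_cache_size: `${Math.floor(totalRamMb * 0.75)}MB`, // ~75% of RAM
  };
}

// A 1 GB server yields the values used in this guide.
console.log(pgMemorySettings(1024));
// → { shared_buffers: '256MB', effective_cache_size: '768MB' }
```

These are starting points, not hard rules; workloads with many large queries may also need work_mem raised.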
Edit postgresql.conf (usually at /etc/postgresql/16/main/postgresql.conf):
# Connection settings
listen_addresses = 'localhost' # Only accept local connections
max_connections = 100
# Memory settings (adjust based on available RAM)
shared_buffers = 256MB # ~25% of available RAM
effective_cache_size = 768MB # ~75% of available RAM
work_mem = 4MB
maintenance_work_mem = 128MB
# Logging
log_statement = 'ddl' # Log DDL statements
log_min_duration_statement = 1000 # Log queries taking more than 1 second
Restart PostgreSQL after changes:
sudo systemctl restart postgresql
Migration System
Siyahfy uses an automatic migration system built into the backend. Migrations run every time the backend server starts.
How It Works
- On startup, backend.siyahfy.com/config/index.js calls runMigrations()
- The function creates a _migrations tracking table if it does not exist:
CREATE TABLE IF NOT EXISTS _migrations (
id SERIAL PRIMARY KEY,
filename VARCHAR(255) UNIQUE NOT NULL,
executed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
- It reads all .sql and .js files from backend.siyahfy.com/migrations/
- Files are sorted alphabetically and executed in order
- Each file is checked against the _migrations table — if a record exists, it is skipped
- After successful execution, the filename is inserted into _migrations
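The selection step above amounts to: keep .sql and .js files, sort alphabetically, and drop anything already recorded in _migrations. A minimal sketch of that logic (function and variable names are illustrative; the real implementation lives in backend.siyahfy.com/config/index.js):

```javascript
// Illustrative sketch of the pending-migration selection described above;
// this is not the actual backend code.
function pendingMigrations(files, executed) {
  const done = new Set(executed); // filenames already recorded in _migrations
  return files
    .filter((f) => f.endsWith('.sql') || f.endsWith('.js')) // only migration files
    .sort()                                                 // alphabetical run order
    .filter((f) => !done.has(f));                           // skip executed ones
}

console.log(pendingMigrations(
  ['create-pages-table.sql', 'add-price-usd.sql', 'README.md'],
  ['add-price-usd.sql']
));
// → [ 'create-pages-table.sql' ]
```

Because ordering is purely alphabetical, filenames effectively define execution order.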
Migration Files
Migration files live in backend.siyahfy.com/migrations/. Current migrations include:
| File | Purpose |
|---|---|
| add-custom-fonts.sql | Custom font storage for themes |
| add-draft-config.sql | Draft configuration support |
| add-draft-files.sql | Draft file management |
| add-pickup-addresses.sql | Pickup address support for orders |
| add-price-usd.sql | USD pricing column |
| add-signup-tracking.sql | Signup funnel analytics |
| add-theme-name-column.sql | Theme name metadata |
| app-credits.sql | Credit/billing system tables |
| create_whatsapp_marketing_settings.sql | WhatsApp campaign settings |
| create-pages-table.sql | Custom page builder tables |
| create-refresh-tokens.sql | JWT refresh token storage |
| create-store-banners.sql | Storefront banner management |
| create-store-menus.sql | Store navigation menus |
| create-store-templates.sql | Store template presets |
| create-theme-history-table.sql | Theme version history |
| create-theme-tables.sql | Theme marketplace tables |
| developer-apps.sql | Developer app registry |
| developer-bank-details.sql | Developer payout information |
| developer-earnings.sql | Developer earnings tracking |
| inventory-system-v2.sql | Inventory management (v2) |
| migrate-inventory-data.js | Data migration script for inventory |
| payment-gateway-features.sql | Payment gateway configuration |
| prefilled-chat.sql | Pre-filled chat templates |
| prefilled-chat-conversations.sql | Pre-filled chat conversation data |
| signup-funnel.sql | Signup funnel tracking tables |
| studio-developers.sql | Studio developer profiles |
| theme-marketplace.sql | Theme marketplace listings |
| theme-terminate.sql | Theme deactivation support |
| vendor-credits.sql | Vendor credit system |
Writing New Migrations
To add a new migration:
- Create a .sql file in backend.siyahfy.com/migrations/
- Use a descriptive, kebab-case filename
- The migration runs automatically on the next server start
-- backend.siyahfy.com/migrations/add-coupon-codes.sql
CREATE TABLE IF NOT EXISTS coupon_codes (
id SERIAL PRIMARY KEY,
store_id INTEGER REFERENCES stores(id) ON DELETE CASCADE,
code VARCHAR(50) NOT NULL,
discount_type VARCHAR(20) NOT NULL CHECK (discount_type IN ('percentage', 'fixed')),
discount_value DECIMAL(10,2) NOT NULL,
min_order_amount DECIMAL(10,2) DEFAULT 0,
max_uses INTEGER,
used_count INTEGER DEFAULT 0,
expires_at TIMESTAMP,
is_active BOOLEAN DEFAULT true,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_coupon_codes_store ON coupon_codes(store_id);
CREATE UNIQUE INDEX idx_coupon_codes_unique ON coupon_codes(store_id, code);
For JavaScript migrations (data transformations), export an async function:
// backend.siyahfy.com/migrations/transform-data-example.js
const pool = require('../config');
module.exports = async function () {
const client = await pool.connect();
try {
await client.query('BEGIN');
// ... perform data transformations
await client.query('COMMIT');
} catch (err) {
await client.query('ROLLBACK');
throw err;
} finally {
client.release();
}
};
Checking Migration Status
# Connect to the database
psql -U postgres -d siyahfy
-- View all executed migrations
SELECT filename, executed_at FROM _migrations ORDER BY executed_at;
-- Check if a specific migration has run
SELECT * FROM _migrations WHERE filename = 'create-theme-tables.sql';
Backup Strategies
Manual Backup with pg_dump
# Full database backup (compressed)
pg_dump -U postgres -d siyahfy -Fc -f /backups/siyahfy_$(date +%Y%m%d_%H%M%S).dump
# SQL format backup (human-readable)
pg_dump -U postgres -d siyahfy > /backups/siyahfy_$(date +%Y%m%d_%H%M%S).sql
# Backup specific tables only
pg_dump -U postgres -d siyahfy -t stores -t products -t orders \
-Fc -f /backups/siyahfy_tables_$(date +%Y%m%d_%H%M%S).dump
# Schema-only backup (no data)
pg_dump -U postgres -d siyahfy --schema-only > /backups/siyahfy_schema.sql
Automated Daily Backups
Create a backup script and schedule it with cron:
#!/bin/bash
# /var/www/siyahfy/backup-db.sh
BACKUP_DIR="/backups/postgresql"
DB_NAME="siyahfy"
DB_USER="postgres"
RETENTION_DAYS=30
# Create backup directory
mkdir -p "$BACKUP_DIR"
# Create backup
FILENAME="$BACKUP_DIR/${DB_NAME}_$(date +%Y%m%d_%H%M%S).dump"
pg_dump -U "$DB_USER" -d "$DB_NAME" -Fc -f "$FILENAME"
# Compress if SQL format
# gzip "$FILENAME"
# Remove backups older than retention period
find "$BACKUP_DIR" -name "${DB_NAME}_*.dump" -mtime +$RETENTION_DAYS -delete
echo "Backup completed: $FILENAME"
echo "Size: $(du -sh "$FILENAME" | cut -f1)"
# Make executable
chmod +x /var/www/siyahfy/backup-db.sh
# Schedule daily backup at 2 AM
crontab -e
# Add this line:
0 2 * * * /var/www/siyahfy/backup-db.sh >> /var/log/siyahfy-backup.log 2>&1
Off-Site Backups
For production, copy backups to a remote location:
# Sync backups to a remote server
rsync -avz /backups/postgresql/ user@backup-server:/backups/siyahfy/
# Or upload to Backblaze B2 / Cloudflare R2
aws s3 cp "$FILENAME" s3://your-backup-bucket/postgresql/ \
  --endpoint-url https://s3.us-east-005.backblazeb2.com
Redis Setup
Redis is optional but recommended for session management and caching. The backend starts without it, but caching features will be disabled.
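One common way to implement this kind of optional dependency is a no-op cache stub when REDIS_URL is absent, so the rest of the code never has to branch on whether Redis exists. A sketch of the pattern (hypothetical names; this is not the backend's actual code):

```javascript
// Hypothetical sketch (not the backend's actual code): return a no-op cache
// when REDIS_URL is unset, so callers use one interface either way.
function createCache(env) {
  if (!env.REDIS_URL) {
    return {
      enabled: false,
      async get(_key) { return null; },        // every lookup is a cache miss
      async set(_key, _value) { /* no-op */ }, // writes are silently dropped
    };
  }
  // With REDIS_URL set, a real client (e.g. the `redis` npm package) would
  // be created here; omitted to keep the sketch self-contained.
  return { enabled: true };
}

const cache = createCache(process.env); // disabled stub unless REDIS_URL is set
```

With this shape, route handlers call cache.get/cache.set unconditionally and simply fall through to the database when Redis is not configured.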
Installation
# Ubuntu/Debian
sudo apt install -y redis-server
# Start and enable
sudo systemctl start redis-server
sudo systemctl enable redis-server
# Verify
redis-cli ping # Should return PONG
Configuration
Edit /etc/redis/redis.conf:
# Bind to localhost only
bind 127.0.0.1
# Set a password (recommended for production)
requirepass your_redis_password
# Memory limit
maxmemory 256mb
maxmemory-policy allkeys-lru
# Persistence
save 900 1
save 300 10
save 60 10000
Restart Redis after changes:
sudo systemctl restart redis-server
Environment Variable
Update backend.siyahfy.com/.env:
# Without password
REDIS_URL=redis://localhost:6379
# With password
REDIS_URL=redis://:your_redis_password@localhost:6379
Common Database Operations
Reset the Database
Drops all data and re-runs migrations on the next backend start:
# Connect to PostgreSQL
sudo -u postgres psql
# Drop and recreate the database
DROP DATABASE siyahfy;
CREATE DATABASE siyahfy;
# Restart the backend to trigger migrations
pm2 restart backend
Restore from Backup
# From a compressed dump file
pg_restore -U postgres -d siyahfy --clean --if-exists /backups/siyahfy_20260101_020000.dump
# From a SQL file
psql -U postgres -d siyahfy < /backups/siyahfy_20260101_020000.sql
# Restore to a fresh database
sudo -u postgres psql -c "DROP DATABASE IF EXISTS siyahfy;"
sudo -u postgres psql -c "CREATE DATABASE siyahfy;"
pg_restore -U postgres -d siyahfy /backups/siyahfy_20260101_020000.dump
Connect to the Database
# Using psql
psql -U postgres -d siyahfy
# Useful psql commands
\dt # List all tables
\d+ stores # Describe a table with details
\di # List all indexes
\df # List all functions
\x # Toggle expanded display
Check Database Size
-- Total database size
SELECT pg_size_pretty(pg_database_size('siyahfy'));
-- Table sizes
SELECT
tablename,
pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename)) AS total_size,
pg_size_pretty(pg_relation_size(schemaname || '.' || tablename)) AS data_size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname || '.' || tablename) DESC;
-- Row counts per table
SELECT
schemaname,
relname AS table_name,
n_live_tup AS row_count
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
Troubleshooting
Connection Refused
# Check if PostgreSQL is running
sudo systemctl status postgresql
# Check which port it's listening on
sudo ss -tlnp | grep postgres
# Check pg_hba.conf for authentication rules
sudo cat /etc/postgresql/16/main/pg_hba.conf
Too Many Connections
-- Check current connections
SELECT count(*) FROM pg_stat_activity;
-- View active connections by application
SELECT application_name, count(*)
FROM pg_stat_activity
GROUP BY application_name;
-- Terminate idle connections
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
AND query_start < now() - interval '10 minutes';
Slow Queries
-- Enable query logging (in postgresql.conf)
-- log_min_duration_statement = 500 (log queries > 500ms)
-- Find slow queries in the log
-- sudo tail -f /var/log/postgresql/postgresql-16-main.log
-- Check for missing indexes
SELECT
relname AS table_name,
seq_scan,
seq_tup_read,
idx_scan,
idx_tup_fetch
FROM pg_stat_user_tables
WHERE seq_scan > idx_scan
ORDER BY seq_tup_read DESC
LIMIT 10;
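If you want to act on that output from the backend, the check reduces to comparing the scan counters per table. An illustrative sketch (thresholds and names are hypothetical, not part of Siyahfy):

```javascript
// Hypothetical helper: given rows shaped like the query above, return the
// tables where sequential scans outnumber index scans (likely index gaps).
function tablesNeedingIndexes(rows, minSeqScans = 100) {
  return rows
    .filter((r) => r.seq_scan > minSeqScans)       // ignore rarely-read tables
    .filter((r) => r.seq_scan > (r.idx_scan || 0)) // seq scans dominate
    .map((r) => r.table_name);
}

console.log(tablesNeedingIndexes([
  { table_name: 'orders',   seq_scan: 5000, idx_scan: 20 },
  { table_name: 'products', seq_scan: 40,   idx_scan: 9000 },
]));
// → [ 'orders' ]
```

High seq_scan counts are only a hint; confirm with EXPLAIN on the actual slow queries before adding indexes.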