Building and testing ATProtocol applications
Posted 2024-08-03 12:00 ‐ 6 min read
Developing an ATProtocol Application requires an understanding of the protocol and some infrastructure. This article explains how I create and maintain a development environment while working on Smoke Signal.
Goals
I've made the development environment similar to the production environment, with a few minor caveats. I also didn't want to cut corners on security or protocol correctness.
- All connections to and from Smoke Signal development instances are over SSL.
- All connections to and from support services (PLC and PDS) are over SSL.
- All interactions between Smoke Signal, the PLC, and the PDS are true to protocol.
Additionally, I wanted to be able to create and destroy the world quickly, allowing for fast turnarounds with experimentation.
Domain
The development environment uses the real "pyroclastic.cloud" domain. Two records on that domain are relevant:
- A pyroclastic.cloud resolves to 127.0.0.1
- A *.pyroclastic.cloud resolves to 127.0.0.1
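Both the apex and any subdomain should come back as 127.0.0.1, which is easy to sanity-check with dig:
dig +short pyroclastic.cloud
dig +short anything.pyroclastic.cloud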
Having the pyroclastic.cloud domain makes it easy to spin up hostname-based services that resolve locally and to have any number of ATProtocol handles served by the local PDS.
It also has the side benefit of breaking loudly if something accidentally leaks: if a DID lookup is made against the live PLC, or an external service attempts to look up a handle, it will fail.
Entry
I run Caddy on ports 80 and 443 as the entry point for all web requests. It proxies requests based on hostname, performs TLS termination, and manages a CA certificate.
{
    storage file_system ./caddy/
    debug
    pki {
        ca pyroclastic {
            name "Pyroclastic Cloud"
        }
    }
}

acme.pyroclastic.cloud {
    tls {
        issuer internal {
            ca pyroclastic
        }
    }
    acme_server {
        ca pyroclastic
    }
}

plc.pyroclastic.cloud {
    tls {
        issuer internal {
            ca pyroclastic
        }
    }
    reverse_proxy http://127.0.0.1:3000
}

smokesignal.pyroclastic.cloud {
    tls {
        issuer internal {
            ca pyroclastic
        }
    }
    reverse_proxy http://127.0.0.1:8080
}

pds.pyroclastic.cloud, *.pyroclastic.cloud {
    tls {
        issuer internal {
            ca pyroclastic
        }
    }
    reverse_proxy http://127.0.0.1:3001
}
There are a few things to learn here:
- I can access the development PLC at https://plc.pyroclastic.cloud/ via a proxy connection to the service running on port 3000.
- I can access the development PDS at https://pds.pyroclastic.cloud/ via a proxy connection to the service running on port 3001.
- I can access the development instance of Smoke Signal at https://smokesignal.pyroclastic.cloud/.
All of these configurations use the internal issuer and CA configuration. The root configuration block defines a TLS CA, and Caddy creates certificates for hosts that reference it when it starts.
After all of that, I start Caddy with the following command:
caddy run --config=./Caddyfile
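Once Caddy has run at least once, the root certificate for the pyroclastic CA sits under the configured storage directory, and openssl can confirm what was generated:
openssl x509 -in ./caddy/pki/authorities/pyroclastic/root.crt -noout -subject -dates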
If you are having issues running Caddy on port 80 and 443, you may need to give it privileges:
sudo setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/caddy
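You can confirm the capability stuck with:
getcap /usr/bin/caddy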
PLC
I'm running an instance of did-method-plc and making it available to the PDS and Smoke Signal. I had to do a few things to get it working:
First, I built the Docker image locally, tagging it to match the image name referenced in the compose file below:
docker build -f ./packages/server/Dockerfile -t plcjs .
Then, I created the file docker-compose.yml at the root of the project:
version: '3.8'
services:
  db:
    image: postgres:14.4-alpine
    network_mode: "host"
    environment:
      - POSTGRES_USER=pg
      - POSTGRES_PASSWORD=password
    ports:
      - '5433:5432'
    healthcheck:
      test: 'pg_isready -U pg'
      interval: 500ms
      timeout: 10s
      retries: 20
    volumes:
      - plc_db:/var/lib/postgresql/data
      - ./postgres/init/:/docker-entrypoint-initdb.d/
  plc:
    depends_on: [db]
    image: docker.io/library/plcjs
    network_mode: "host"
    environment:
      - DATABASE_URL=postgres://pg:password@db/plc
      - DEBUG_MODE=1
      - LOG_ENABLED=true
      - LOG_LEVEL=debug
      - DB_CREDS_JSON={"username":"pg","password":"password","host":"localhost","port":"5432","database":"plc"}
      - DB_MIGRATE_CREDS_JSON={"username":"pg","password":"password","host":"localhost","port":"5432","database":"plc"}
      - ENABLE_MIGRATIONS=true
      - LOG_DESTINATION=1
    ports:
      - '2582:2582'
volumes:
  plc_db:
Lastly, I created the local directory and file ./postgres/init/init.sql with the content:
-- plc
CREATE DATABASE plc;
GRANT ALL PRIVILEGES ON DATABASE plc TO pg;
-- bgs
CREATE DATABASE bgs;
CREATE DATABASE carstore;
GRANT ALL PRIVILEGES ON DATABASE bgs TO pg;
GRANT ALL PRIVILEGES ON DATABASE carstore TO pg;
-- bsky(appview)
CREATE DATABASE bsky;
GRANT ALL PRIVILEGES ON DATABASE bsky TO pg;
-- ozone(moderation)
CREATE DATABASE mod;
GRANT ALL PRIVILEGES ON DATABASE mod TO pg;
-- pds
CREATE DATABASE pds;
GRANT ALL PRIVILEGES ON DATABASE pds TO pg;
Once that is in place, you can spin up the PLC service with the following command:
docker compose up
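Before reaching for HTTP, you can confirm the init script ran by listing the databases through the compose service (this assumes the db service name from the compose file above):
docker compose exec db psql -U pg -c '\l'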
You can verify that it is running with curl:
CURL_CA_BUNDLE=/path/to/caddy/pki/authorities/pyroclastic/root.crt curl https://plc.pyroclastic.cloud/_health
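Rather than prefixing every command, you can export the variable once per shell session:
export CURL_CA_BUNDLE="$(pwd)/caddy/pki/authorities/pyroclastic/root.crt"
curl https://plc.pyroclastic.cloud/_health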
PDS
Running a PDS is the same in development as it is in production with a few differences:
- The PDS needs to know about the CA certificate that Caddy created to make valid HTTPS requests against the PLC.
- The configuration must point to the local PLC instead of the live one.
The PDS has the following environment variables in the file .pds.env, in addition to the standard ones required to run it:
PDS_SERVICE_HANDLE_DOMAINS=.pyroclastic.cloud
PDS_HOSTNAME=pds.pyroclastic.cloud
LOG_ENABLED=true
PDS_DID_PLC_URL=https://plc.pyroclastic.cloud
PDS_ACCEPTING_REPO_IMPORTS=true
DEBUG_MODE=1
LOG_LEVEL=trace
NODE_EXTRA_CA_CERTS=/certs/root.crt
PDS_PORT=3001
I'm using the following command to start the PDS and make sure all of the necessary files are available to it inside of the container:
docker run --network=host -p 3001:3001 --env-file ./.pds.env --mount type=bind,source="$(pwd)"/pds,target=/pds --mount type=bind,source="$(pwd)"/caddy/pki/authorities/pyroclastic/,target=/certs ghcr.io/bluesky-social/pds:0.4
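Once the container is up, the same curl pattern works against the PDS, which exposes its own health check at /xrpc/_health:
CURL_CA_BUNDLE=/path/to/caddy/pki/authorities/pyroclastic/root.crt curl https://pds.pyroclastic.cloud/xrpc/_health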
Creating Accounts
The account creation process is similar to the documented one, but I had to make one change to the script to get it working:
# Ensure the user is root, since it's required for most commands.
# if [[ "${EUID}" -ne 0 ]]; then
#   echo "ERROR: This script must be run as root"
#   exit 1
# fi
I'm not running the script as root and don't need to, so I commented that out.
Now I can create accounts with a command like this:
CURL_CA_BUNDLE=$(pwd)/caddy/pki/authorities/pyroclastic/root.crt PDS_ENV_FILE=$(pwd)/.pds.env ./pdsadmin.sh account create masonthedog@pds.pyroclastic.cloud masonthedog.pyroclastic.cloud
You should see a message that looks like this:
Account created successfully!
-----------------------------
Handle : masonthedog.pyroclastic.cloud
DID : did:plc:qtqslitywl6idlcs4v75jamk
Password : RdeTyDAmFcUUbD8jByUVCeCl
-----------------------------
Save this password, it will not be displayed again.
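With an account in place, you can confirm handle resolution works end to end: the XRPC endpoint should return the new DID, and thanks to the wildcard route in the Caddyfile, the handle's own hostname serves the well-known document:
export CURL_CA_BUNDLE="$(pwd)/caddy/pki/authorities/pyroclastic/root.crt"
curl "https://pds.pyroclastic.cloud/xrpc/com.atproto.identity.resolveHandle?handle=masonthedog.pyroclastic.cloud"
curl https://masonthedog.pyroclastic.cloud/.well-known/atproto-did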
Conclusion
And that's it! When I start a development session, the first thing I do is start Caddy at the command line, the PLC with docker compose up, and the PDS with docker run. When I need to start from scratch, I stop and down all services, delete the data volumes, and start everything back up.
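A rough sketch of that reset, assuming the layout described above (the compose project for the PLC, and a ./pds data directory mounted into the PDS container):
# stop the PLC stack and drop its postgres volume
docker compose down --volumes
# clear the PDS data directory mounted into the container
rm -rf ./pds/*
The Caddy state in ./caddy/ can stay; the CA certificate survives resets, so the trust setup doesn't need to be redone.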
I want to draw attention to one crucial detail, though: the Docker host network. The PLC and PDS run inside containers, so when the PDS resolves plc.pyroclastic.cloud it gets 127.0.0.1, and localhost inside a container is the container itself, not the host. Using the host network sidesteps that problem without extra networking configuration or hard-coded hostnames and IPs at the container level.
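A quick way to see the difference, using the curlimages/curl image (-k skips certificate verification, since this throwaway container doesn't trust the Caddy root certificate):
# bridge network: plc.pyroclastic.cloud resolves to 127.0.0.1, which is the
# container itself, so the connection is refused
docker run --rm curlimages/curl -k https://plc.pyroclastic.cloud/_health
# host network: 127.0.0.1 is the host, so Caddy answers
docker run --rm --network=host curlimages/curl -k https://plc.pyroclastic.cloud/_health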
If you're getting into ATProtocol Application development, I hope this helps.