
Prysma Sentinel Guide

v2 — Tactical map, geofences, fleet chat, mobile app, and alerts

Introduction

What is Prysma Sentinel v2?

Prysma Sentinel is a field-operations platform built for mission-critical work: fleet visibility on a tactical map, contextual SOS alerts, geofences with per-zone rules, persistent per-unit chat, and unified history.

Version 2 ties the physical ecosystem (LoRa nodes and gateway) to the SaaS control plane and the Sentinel mobile app—one source of truth for admins and operators.

Organization and access

Org ID and first-time setup

Each organization has an Org ID that links the web dashboard, the database, and the Sentinel mobile app.

On the fleet map you can copy the Org ID from the dedicated control and paste it into the app so devices enroll under the correct tenant.

  • Treat the Org ID as operational configuration; do not publish it broadly for sensitive deployments.
  • If your admin workflow uses org impersonation, always confirm which org is active before editing geofences or billing.
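Because the Org ID is pasted by hand into the app, a little client-side hygiene avoids silent enrollment under the wrong tenant. A minimal sketch, assuming UUID-style Org IDs (the real format may differ):

```typescript
// Assumption: Org IDs are UUID-like strings; adjust the pattern to the real format.
const ORG_ID_PATTERN =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

// Trim pasted whitespace and validate shape before writing the app config.
function normalizeOrgId(raw: string): string | null {
  const candidate = raw.trim();
  return ORG_ID_PATTERN.test(candidate) ? candidate : null;
}
```

Rejecting malformed input at paste time is cheaper than debugging a device that enrolled under no tenant at all.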

Language, guide, and command palette

This guide is available in Spanish and English—use /es/guide or /en/guide, or switch locale from the landing experience.

From the fleet hub you can open the guide, chat, and other views from the side rail, hamburger menu, or command palette (quick search for actions).

Hardware

Heltec V4 — power on and pairing

Sentinel nodes are built on Heltec V4–class boards (LoRa radio, GPS, and battery). After power-on, the device follows your organization’s pairing policy.

  • Flash the Prysma-provided firmware or binary before first deployment.
  • Register the org in the dashboard and copy the Org ID into the Sentinel mobile app.
  • Bring the node within range of the gateway or mesh per your coverage plan.
  • Confirm on the tactical map that the node shows as nominal before field use.

Tactical map

HUD at a glance

The map uses the Sentinel Dark basemap: deep background, subtle roads, and cyan mesh lines from the gateway to visible nodes.

Corners show cursor coordinates, gateway status, active node count, and a short tactical event log.

Node colors

Circular markers with an animated ring show near-real-time position.

  • Cyan: nominal.
  • Emerald: operator on duty (duty on), reported from the mobile app.
  • Orange with a faster pulse: tactical alert or simulated SOS for training.
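The color legend above boils down to a precedence rule: SOS beats duty, duty beats nominal. An illustrative helper (the state shape here is an assumption, not the actual Sentinel schema):

```typescript
// Illustrative mapping from unit state to the marker colors described above.
type UnitState = { sosActive: boolean; dutyOn: boolean };

function markerColor(state: UnitState): "orange" | "emerald" | "cyan" {
  if (state.sosActive) return "orange"; // tactical alert / simulated SOS
  if (state.dutyOn) return "emerald";   // operator on duty
  return "cyan";                        // nominal
}
```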

Geofences on the map

Zones you define in the geofence studio render as overlays on the fleet map (by variant: operations or restricted). That gives you immediate context for rules while tracking units.

When a unit enters or leaves a zone, the system can record the transition in the map intelligence stream (entry/exit messages including unit and zone names).
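Entry/exit detection reduces to comparing a unit's zone membership between two consecutive fixes. A sketch under that assumption (zone evaluation itself is abstracted away; names are illustrative):

```typescript
// Diff a unit's zone membership between the previous and current fix.
type Transition = { zone: string; kind: "entry" | "exit" };

function detectTransitions(
  prevZones: Set<string>,
  currZones: Set<string>,
): Transition[] {
  const events: Transition[] = [];
  for (const zone of currZones) {
    if (!prevZones.has(zone)) events.push({ zone, kind: "entry" }); // newly inside
  }
  for (const zone of prevZones) {
    if (!currZones.has(zone)) events.push({ zone, kind: "exit" }); // no longer inside
  }
  return events;
}
```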

Mesh visualization

Glowing lines connect the hub (gateway) to each displayed node. They are a logical deployment guide in this view—not a live RF link measurement until link telemetry is wired in.

Geofences (studio)

What they are for

Geofences delimit areas for operations or access control. They are stored per organization and power the web map, dashboard alert logic, and the mobile app (sync and in-field notices).

  • Operations: working area or tactical interest (visibility and map context).
  • Restricted: area where operator entry should raise a notice (mobile notification and monitoring from the operations center).

Creating and editing zones

Open the geofence studio from the fleet menu (path /fleet/geofences). There you can draw new shapes on the map, set name and variant, and save.

Circle and polygon shapes are supported as provided by the editor—adjust vertices or radius until the zone matches the real perimeter.

  • Save after meaningful edits; if you see database errors, confirm Supabase migrations are applied for the environment you use (local or cloud).
  • Name zones for operations clarity (e.g., north perimeter, authorized personnel only) so log lines and mobile copy stay readable.

Mobile app and API

The app pulls your organization’s zone list from the mobile geofences endpoint (authenticated with the same mobile JWT as the rest of the mobile API).

Remote config (/api/mobile/config) exposes features.geofences: when disabled, the client does not apply geofence rules even if zones exist server-side.
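A client consuming this contract might look like the sketch below. The endpoint path and flag names come from this guide; the exact response shape and the permission check are assumptions:

```typescript
// Assumed response shape, based on the flags this guide mentions.
type RemoteConfig = { features: { sos: boolean; geofences: boolean } };

async function loadConfig(baseUrl: string, jwt: string): Promise<RemoteConfig> {
  const res = await fetch(`${baseUrl}/api/mobile/config`, {
    headers: { Authorization: `Bearer ${jwt}` },
  });
  if (!res.ok) throw new Error(`config fetch failed: ${res.status}`);
  return (await res.json()) as RemoteConfig;
}

function shouldEvaluateGeofences(
  cfg: RemoteConfig,
  hasLocationPermission: boolean,
): boolean {
  // Even if zones exist server-side, a disabled flag means no client-side rules.
  return cfg.features.geofences && hasLocationPermission;
}
```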

Sentinel mobile app

Remote feature flags

After sign-in, the app reads backend configuration to learn which modules are enabled for your deployment.

  • SOS: when features.sos is off, the emergency flow is hidden or disabled, depending on the client version.
  • Geofences: when features.geofences is on and location permission is granted, the app evaluates position against restricted zones and may notify on entry.

Geofences in the field

Restricted zones are meant to warn the operator in the moment; operations zones mainly add context on the web map.

On-device evaluation uses geometry synced from the server—keep a good GPS fix and open the app with network at least once to refresh zones.
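On-device zone checks reduce to geometry tests against the synced shapes. A sketch of the two cases the studio supports, point-in-circle and point-in-polygon, treating lat/lng as planar over short distances (a simplification; the real client may use proper geodesics):

```typescript
type Point = { lat: number; lng: number };

// Equirectangular approximation, adequate for small radii.
function insideCircle(p: Point, center: Point, radiusMeters: number): boolean {
  const mPerDegLat = 111_320;
  const mPerDegLng = 111_320 * Math.cos((center.lat * Math.PI) / 180);
  const dy = (p.lat - center.lat) * mPerDegLat;
  const dx = (p.lng - center.lng) * mPerDegLng;
  return Math.hypot(dx, dy) <= radiusMeters;
}

// Classic ray-casting test: count edge crossings; odd means inside.
function insidePolygon(p: Point, ring: Point[]): boolean {
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const a = ring[i], b = ring[j];
    const crosses =
      a.lng > p.lng !== b.lng > p.lng &&
      p.lat < ((b.lat - a.lat) * (p.lng - a.lng)) / (b.lng - a.lng) + a.lat;
    if (crosses) inside = !inside;
  }
  return inside;
}
```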

Missions on dashboard and mobile

Administrators create per-node missions in the hub (missions drawer): title, description, and status. When assigned, operators can open them under the Missions tab in the mobile app if the feature flag is enabled in remote config.

Operators can mark a mission completed from the phone; the update is stored in Supabase and reflected for the team in the web dashboard.

Inactivity watchdog (dead man switch)

Sentinel tracks each node’s last valid GPS fix in the database (`last_gps_fix_at`). When telemetry arrives without fresh coordinates and the previous fix is older than the configured threshold (`SENTINEL_DEAD_MAN_MINUTES`, default 30 minutes), the backend may insert a warning row in the fleet event log.

This is an operational dead man switch: it does not replace an explicit SOS, but helps surface units that stopped reporting position for too long while still sending other payloads.

  • Threshold is per deployment via the Next.js server environment.
  • Warnings are deduplicated per node (~2 h) to avoid flooding history.
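The core of the check is a simple staleness comparison. A sketch, using the default from this guide (`SENTINEL_DEAD_MAN_MINUTES` = 30); the function shape and the no-previous-fix policy are illustrative:

```typescript
// A node is flagged when telemetry arrives without a fresh fix and the stored
// last fix (last_gps_fix_at) is older than the configured threshold.
function isGpsStale(
  lastGpsFixAt: Date | null,
  now: Date,
  thresholdMinutes = 30, // guide default for SENTINEL_DEAD_MAN_MINUTES
): boolean {
  if (!lastGpsFixAt) return false; // never had a fix: nothing to compare yet
  const ageMinutes = (now.getTime() - lastGpsFixAt.getTime()) / 60_000;
  return ageMinutes > thresholdMinutes;
}
```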

Fleet chat

Per-unit threads

Fleet chat (/fleet/chat) centralizes messaging between the operations center and each identified node.

Messages are tied to organization and node id, so threads remain coherent even if the handheld operator changes, as long as the hardware keeps the same node id.

  • Dashboard users send as admin; field-originated messages appear as node in history.
  • Use chat for short coordination; for critical incidents still follow voice protocol and the dashboard SOS flow.

Alerts

Handling an SOS

When an operator triggers SOS from a device or the app, the dashboard raises a high-priority alert and may focus the map on the last known fix.

Use the unit detail panel to review identity, last-seen time, and clear tactical alerts when appropriate. Platform-level SOS may use the hub acknowledgement flow.

  • Use voice or internal protocol before closing the incident in software.
  • Record closure per your standard operating procedures.

Geofence-related alerts and events

On the web map, unit transitions into or out of a geofence can surface in the tactical log and in unit intelligence copy.

On mobile, entering a restricted zone may trigger a local notification for the operator; behavior depends on location permission and features.geofences.

Account, team, and billing

Profile, settings, and team

From the fleet menu you can open your profile, experience settings (map theme, UI preferences), and the team view when your role allows.

Node display aliases help recognize units by friendly names on the map and panels—managed via hardware or alias admin flows depending on deployment.

Billing and plan limits

If your org has billing or quota limits, the dashboard may show banners about payment status or usage nearing caps.

Use the billing section in the menu for subscription details; for support, include Org ID and admin email.

Data export

Depending on plan and permissions, fleet data export (CSV) may be available from the hub for external analysis or archival.

Server-side history retention affects how long certain events remain available to review or export.

Use cases

SOS case: when the alarm fires

When an SOS alarm fires, the operations center must act in seconds: locate the operator, dispatch a response mission, and notify the team through the agreed channels.

In the web hub, the map can focus the unit and the detail panel shows recent context. The tactical log records the incident for later review.

  • See: open the map and unit sheet; confirm identity and last position.
  • Dispatch mission: from Missions assign a task to the node or a response crew.
  • Notify: use fleet chat or voice protocol per your SOP; do not close the incident in software without operational sign-off.
   +-------------+     +-----------------+     +------------------+
   | SOS triggers| --> | Map + unit card | --> | Mission + alerts |
   | (app/node)  |     | View / focus    |     | Chat / protocol  |
   +-------------+     +-----------------+     +------------------+
          |                       |
          +-----------+-----------+
                      v
             Tactical log (audit)

Logistics case: heatmap and mileage

To optimize routes and coverage, combine the map heat layer with distance reports: heat shows where historical activity clustered; reports quantify travel over a time window.

Use both to rebalance bases, adjust shifts, or validate SLAs with stakeholders.

Historical GPS activity    Mileage / trail report
         |                           |
  +-------------+             +---------------+
  |  Heatmap    | --compare-> | Mileage       |
  | (density)   |   trends    |   per unit    |
  +-------------+             +---------------+
         |                           |
         +-------------+-------------+
                       v
              Route + shift tuning
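One way to compute the mileage side of the comparison is to sum haversine distances over a unit's ordered GPS trail. A sketch only; the dashboard's actual report logic is not specified here:

```typescript
type Fix = { lat: number; lng: number };

// Great-circle distance between two fixes (haversine formula).
function haversineMeters(a: Fix, b: Fix): number {
  const R = 6_371_000; // mean Earth radius in meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Total travel over an ordered trail of fixes.
function trailMeters(trail: Fix[]): number {
  let total = 0;
  for (let i = 1; i < trail.length; i++) {
    total += haversineMeters(trail[i - 1], trail[i]);
  }
  return total;
}
```

In practice you would also filter GPS jitter (e.g. drop segments below the fix accuracy) before summing, or short stationary periods inflate mileage.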

Safety case: Dead Man Switch (lone worker)

For isolated work, a Dead Man Switch (DMS) requires periodic proof the operator is OK. If check-in is late, the system can escalate to supervisors or spawn a welfare mission.

Configure intervals and contacts to match site risk; align with legal and HSE before enabling aggressive policies.

Operator in field
  +---------------------+
  | Periodic check-in   +---- DMS timer
  +----------+----------+
             |
        on time? ---- no
    yes  |            |
         v            v
      Normal     Escalation (alert / mission)
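The yes/no branch above is a single deadline comparison. A tiny evaluator as a sketch; the interval and grace values are illustrative placeholders, not shipped defaults:

```typescript
// Decide whether a late check-in should escalate, given the required interval
// and an optional grace period for flaky connectivity.
function dmsDecision(
  lastCheckInMs: number,
  nowMs: number,
  intervalMs: number,
  graceMs = 0,
): "normal" | "escalate" {
  return nowMs - lastCheckInMs > intervalMs + graceMs ? "escalate" : "normal";
}
```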

Synchronization

Live data and KV

The dashboard syncs via periodic snapshots and, when enabled, an event stream. Node aliases and retention policies are enforced server-side.

The tactical simulator (lab only) generates positions on the server and merges with real nodes for UI testing without field hardware.

Maintenance and testing (SuperAdmin)

Running the test suite

Sentinel ships with a Vitest unit-test suite covering the ingest engine (GPS processing, SOS detection) and SaaS logic (per-plan node limits). Run the tests before any deploy to verify the pipeline works end-to-end.

  • npm test — runs all tests in single-run mode (CI-friendly).
  • npm run test:watch — interactive mode with hot-reload for development.
  • Tests mock Supabase and KV; no external services are required to pass.
  cd web-dashboard
  npm test

  ✓ Ingest Engine – GPS processing (5 tests)
  ✓ Ingest Engine – SOS alert trigger (4 tests)
  ✓ Ingest Engine – Supabase upsert (2 tests)
  ✓ SaaS plan enforcement – node limit (3 tests)

System logs (sentinel_system_logs)

Critical /api/fleet/** endpoints are wrapped with an error-tracking guard. Any unhandled exception is automatically recorded in the sentinel_system_logs Supabase table, including level (info/warn/error/fatal), source, message, JSON metadata, and stack trace.

Query the table from the Supabase dashboard or with raw SQL for SuperAdmin auditing. Records include org_id when available so you can filter by organization.

  • Level error: unhandled exception in a fleet endpoint.
  • Level warn: service degradation (slow Supabase, KV timeout).
  • Level fatal: critical system failure requiring intervention.
  • If Supabase is unavailable, logs are kept in a memory buffer and printed to stderr.
  SELECT level, source, message, created_at
  FROM sentinel_system_logs
  WHERE level = 'error'
  ORDER BY created_at DESC
  LIMIT 20;

Advanced health check

The GET /api/health endpoint verifies Supabase connectivity and KV/Redis queue status. It returns JSON with each service's state, connection latency, and process uptime.

Use this endpoint in your external monitoring stack (UptimeRobot, Checkly, etc.) to detect degradations before they impact users.

  • HTTP 200 + status:ok = all services healthy.
  • HTTP 503 + status:degraded = at least one service down (check supabase.connected and kv.connected in the JSON).
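A monitoring probe consuming this endpoint might summarize the JSON as follows. The field names (status, supabase.connected, kv.connected) come from this guide; everything else is an assumption:

```typescript
// Assumed shape of the /api/health response body.
type Health = {
  status: "ok" | "degraded";
  supabase?: { connected: boolean };
  kv?: { connected: boolean };
};

// Turn an HTTP status + body into a short alert string for the monitor.
function summarizeHealth(httpStatus: number, body: Health): string {
  if (httpStatus === 200 && body.status === "ok") return "healthy";
  const down: string[] = [];
  if (body.supabase && !body.supabase.connected) down.push("supabase");
  if (body.kv && !body.kv.connected) down.push("kv");
  return `degraded: ${down.join(", ") || "unknown"}`;
}
```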