# Attachment Domain

Handles file sharing between peers over WebRTC data channels. Files are announced, chunked into 64 KB pieces, streamed peer-to-peer as base64, and optionally persisted to disk (Electron) or kept in memory (browser).

## Module map

```
attachment/
├── application/
│   ├── attachment.facade.ts                      Thin entry point, delegates to manager
│   ├── attachment-manager.service.ts             Orchestrates lifecycle, auto-download, peer listeners
│   ├── attachment-transfer.service.ts            P2P file transfer protocol (announce/request/chunk/cancel)
│   ├── attachment-transfer-transport.service.ts  Base64 encode/decode, chunked streaming
│   ├── attachment-persistence.service.ts         DB + filesystem persistence, migration from localStorage
│   └── attachment-runtime.store.ts               In-memory signal-based state (Maps for attachments, chunks, pending)
│
├── domain/
│   ├── attachment.models.ts                      Attachment type extending AttachmentMeta with runtime state
│   ├── attachment.logic.ts                       isAttachmentMedia, shouldAutoRequestWhenWatched, shouldPersistDownloadedAttachment
│   ├── attachment.constants.ts                   MAX_AUTO_SAVE_SIZE_BYTES = 10 MB
│   ├── attachment-transfer.models.ts             Protocol event types (file-announce, file-chunk, file-request, ...)
│   └── attachment-transfer.constants.ts          FILE_CHUNK_SIZE_BYTES = 64 KB, EWMA weights, error messages
│
├── infrastructure/
│   ├── attachment-storage.service.ts             Electron filesystem access (save / read / delete)
│   └── attachment-storage.helpers.ts             sanitizeAttachmentRoomName, resolveAttachmentStorageBucket
│
└── index.ts                                      Barrel exports
```

## Service composition

The facade is a thin pass-through. All real work happens inside the manager, which coordinates the transfer service (protocol), the persistence service (DB/disk), and the runtime store (signals).
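The pass-through shape can be sketched as follows; the class names come from the module map above, but the method name and bodies are illustrative assumptions, not the actual source.

```typescript
// Illustrative sketch of the facade/manager split. `requestDownload` is a
// hypothetical method name; the real services expose their own API.
class AttachmentManagerService {
  requestDownload(attachmentId: string): string {
    // The real manager coordinates the transfer service, persistence
    // service, and runtime store here.
    return `requested:${attachmentId}`;
  }
}

class AttachmentFacade {
  constructor(private readonly manager: AttachmentManagerService = new AttachmentManagerService()) {}

  // The facade adds no logic of its own; it only forwards to the manager.
  requestDownload(attachmentId: string): string {
    return this.manager.requestDownload(attachmentId);
  }
}
```

Keeping the facade logic-free means consumers depend on one stable entry point while the orchestration behind it can be refactored freely.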
```mermaid
graph TD
    Facade[AttachmentFacade]
    Manager[AttachmentManagerService]
    Transfer[AttachmentTransferService]
    Transport[AttachmentTransferTransportService]
    Persistence[AttachmentPersistenceService]
    Store[AttachmentRuntimeStore]
    Storage[AttachmentStorageService]
    Logic[attachment.logic]

    Facade --> Manager
    Manager --> Transfer
    Manager --> Persistence
    Manager --> Store
    Manager --> Logic
    Transfer --> Transport
    Transfer --> Store
    Persistence --> Storage
    Persistence --> Store
    Storage --> Helpers[attachment-storage.helpers]

    click Facade "application/attachment.facade.ts" "Thin entry point" _blank
    click Manager "application/attachment-manager.service.ts" "Orchestrates lifecycle" _blank
    click Transfer "application/attachment-transfer.service.ts" "P2P file transfer protocol" _blank
    click Transport "application/attachment-transfer-transport.service.ts" "Base64 encode/decode, chunked streaming" _blank
    click Persistence "application/attachment-persistence.service.ts" "DB + filesystem persistence" _blank
    click Store "application/attachment-runtime.store.ts" "In-memory signal-based state" _blank
    click Storage "infrastructure/attachment-storage.service.ts" "Electron filesystem access" _blank
    click Helpers "infrastructure/attachment-storage.helpers.ts" "Path helpers" _blank
    click Logic "domain/attachment.logic.ts" "Pure decision functions" _blank
```

## File transfer protocol

Files move between peers using a request/response pattern over the WebRTC data channel. The sender announces a file, the receiver requests it, and chunks flow back one by one.

```mermaid
sequenceDiagram
    participant S as Sender
    participant R as Receiver

    S->>R: file-announce (id, name, size, mimeType)
    Note over R: Store metadata in runtime store
    Note over R: shouldAutoRequestWhenWatched?
    R->>S: file-request (attachmentId)
    Note over S: Look up file in runtime store or on disk

    loop Every 64 KB chunk
        S->>R: file-chunk (attachmentId, index, data, progress, speed)
        Note over R: Append to chunk buffer
        Note over R: Update progress + EWMA speed
    end

    Note over R: All chunks received
    Note over R: Reassemble blob
    Note over R: shouldPersistDownloadedAttachment? Save to disk
```

### Failure handling

If the sender cannot find the file, it replies with `file-not-found`. The transfer service then tries the next connected peer that has announced the same attachment. Either side can send `file-cancel` to abort a transfer in progress.

```mermaid
sequenceDiagram
    participant R as Receiver
    participant P1 as Peer A
    participant P2 as Peer B

    R->>P1: file-request
    P1->>R: file-not-found
    Note over R: Try next peer
    R->>P2: file-request
    P2->>R: file-chunk (1/N)
    P2->>R: file-chunk (2/N)
    P2->>R: file-chunk (N/N)
    Note over R: Transfer complete
```

## Auto-download rules

When the user navigates to a room, the manager watches the route and decides which attachments to request automatically based on domain logic:

| Condition | Auto-download? |
|---|---|
| Image or video, size <= 10 MB | Yes |
| Image or video, size > 10 MB | No |
| Non-media file | No |

The decision lives in `shouldAutoRequestWhenWatched()`, which calls `isAttachmentMedia()` and checks against `MAX_AUTO_SAVE_SIZE_BYTES`.

## Persistence

On Electron, completed downloads are written to the app-data directory. The storage path is resolved per room and bucket:

```
{appDataPath}/{serverId}/{roomName}/{bucket}/(unknown)
```

Room names are sanitised to remove filesystem-unsafe characters. The bucket is either `attachments` or `media` depending on the attachment type.

`AttachmentPersistenceService` handles startup migration from an older localStorage-based format into the database, and restores attachment metadata from the DB on init. On browser builds, files stay in memory only.
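The path helpers named in the module map might look roughly like this; the function names mirror `infrastructure/attachment-storage.helpers.ts`, but the bodies (the exact characters stripped, the media check) are assumptions for illustration.

```typescript
// Hypothetical sketch of the storage path helpers; the real implementations
// may strip a different character set or classify media differently.

// Replace filesystem-unsafe characters so the room name is a safe directory name.
function sanitizeAttachmentRoomName(roomName: string): string {
  return roomName.replace(/[\\/:*?"<>|]/g, "_");
}

// Media attachments go to the "media" bucket, everything else to "attachments".
function resolveAttachmentStorageBucket(mimeType: string): "attachments" | "media" {
  const isMedia = mimeType.startsWith("image/") || mimeType.startsWith("video/");
  return isMedia ? "media" : "attachments";
}
```

Splitting storage into per-room, per-bucket directories keeps media browsable separately from generic file attachments and avoids name collisions across rooms.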
## Runtime store

`AttachmentRuntimeStore` is a signal-based in-memory store using `Map` instances for:

- **attachments**: all known attachments keyed by ID
- **chunks**: incoming chunk buffers during active transfers
- **pendingRequests**: outbound requests waiting for a response
- **cancellations**: IDs of transfers the user cancelled

Components read attachment state reactively through the store's signals. The store has no persistence of its own; that responsibility belongs to the persistence service.
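A framework-agnostic sketch of the store's shape, assuming the four collections listed above; the real store wraps this state in Angular signals so components react to changes, and its actual field types and method names may differ.

```typescript
// Simplified, signal-free sketch of AttachmentRuntimeStore. The `Attachment`
// fields and all method names here are illustrative assumptions.
interface Attachment {
  id: string;
  name: string;
  sizeBytes: number;
}

class AttachmentRuntimeStore {
  private readonly attachments = new Map<string, Attachment>();
  private readonly chunks = new Map<string, Uint8Array[]>();
  private readonly pendingRequests = new Map<string, string>(); // attachmentId -> peerId
  private readonly cancellations = new Set<string>(); // cancelled transfer IDs

  upsert(attachment: Attachment): void {
    this.attachments.set(attachment.id, attachment);
  }

  // Buffer an incoming chunk; returns how many chunks have arrived so far.
  appendChunk(attachmentId: string, chunk: Uint8Array): number {
    const buffer = this.chunks.get(attachmentId) ?? [];
    buffer.push(chunk);
    this.chunks.set(attachmentId, buffer);
    return buffer.length;
  }

  // Cancelling drops any buffered chunks and the pending request.
  cancel(attachmentId: string): void {
    this.cancellations.add(attachmentId);
    this.chunks.delete(attachmentId);
    this.pendingRequests.delete(attachmentId);
  }

  isCancelled(attachmentId: string): boolean {
    return this.cancellations.has(attachmentId);
  }
}
```

Because all of this lives in memory, a restart clears it; anything worth keeping must have already been handed to the persistence service.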