test: Initiate instructions

.agents/skills/playwright-e2e/SKILL.md (new file, 282 lines)

---
name: playwright-e2e
description: >
  Write and run Playwright E2E tests for MetoYou (Angular chat + WebRTC voice/video app).
  Handles multi-browser-context WebRTC voice testing, fake media devices, signaling validation,
  audio channel verification, and UI state assertions. Use when: "E2E test", "Playwright",
  "end-to-end", "browser test", "test voice", "test call", "test WebRTC", "test chat",
  "test login", "test video", "test screen share", "integration test", "multi-client test".
---

# Playwright E2E Testing — MetoYou

## Step 1 — Check Project Setup

Before writing any test, verify the Playwright infrastructure exists:

```
e2e/                        # Test root (lives at repo root)
├── playwright.config.ts    # Config
├── fixtures/               # Custom fixtures (multi-client, auth, etc.)
├── pages/                  # Page Object Models
├── tests/                  # Test specs
│   ├── auth/
│   ├── chat/
│   ├── voice/
│   └── settings/
└── helpers/                # Shared utilities (WebRTC introspection, etc.)
```

If missing, scaffold it. See [reference/project-setup.md](./reference/project-setup.md).
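
If you need to create the tree from scratch, a short Node script can create the directories above (a minimal sketch; the config and fixture files themselves come from reference/project-setup.md):

```typescript
// Creates the e2e/ directory skeleton shown above (directories only).
import { mkdirSync, existsSync } from 'node:fs';

const dirs = [
  'e2e/fixtures',
  'e2e/pages',
  'e2e/tests/auth',
  'e2e/tests/chat',
  'e2e/tests/voice',
  'e2e/tests/settings',
  'e2e/helpers',
];

for (const dir of dirs) {
  // recursive: true also creates parents and is a no-op if the dir exists
  mkdirSync(dir, { recursive: true });
}
```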

## Step 2 — Identify Test Category

| Request | Category | Key Patterns |
|---------|----------|-------------|
| Login, register, invite | **Auth** | Single browser context, form interaction |
| Send message, rooms, chat UI | **Chat** | May need 2 clients for real-time sync |
| Voice call, mute, deafen, audio | **Voice/WebRTC** | Multi-client, fake media, WebRTC introspection |
| Camera, video tiles | **Video** | Multi-client, fake video, stream validation |
| Screen share | **Screen Share** | Multi-client, display media mocking |
| Settings, themes | **Settings** | Single client, preference persistence |

For **Voice/WebRTC** and **multi-client** tests, read [reference/multi-client-webrtc.md](./reference/multi-client-webrtc.md) immediately.

## Step 3 — Core Conventions

### Config Essentials

```typescript
// e2e/playwright.config.ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  timeout: 60_000, // WebRTC needs longer timeouts
  expect: { timeout: 10_000 },
  retries: process.env.CI ? 2 : 0,
  workers: 1, // Sequential — shared server state
  reporter: [['html'], ['list']],
  use: {
    baseURL: 'http://localhost:4200',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
    video: 'on-first-retry',
    permissions: ['microphone', 'camera'],
  },
  projects: [
    {
      name: 'chromium',
      use: {
        ...devices['Desktop Chrome'],
        launchOptions: {
          args: [
            '--use-fake-device-for-media-stream',
            '--use-fake-ui-for-media-stream',
            // Feed a specific audio file as fake mic input:
            // '--use-file-for-fake-audio-capture=/path/to/audio.wav',
          ],
        },
      },
    },
  ],
  webServer: [
    {
      command: 'cd server && npm run dev',
      port: 3001,
      reuseExistingServer: !process.env.CI,
      timeout: 30_000,
    },
    {
      command: 'cd toju-app && npx ng serve',
      port: 4200,
      reuseExistingServer: !process.env.CI,
      timeout: 60_000,
    },
  ],
});
```

### Selector Strategy

Use in this order — stop at the first that works:

1. `getByRole('button', { name: 'Mute' })` — accessible, resilient
2. `getByLabel('Email')` — form fields
3. `getByPlaceholder('Enter email')` — when a label is missing
4. `getByText('Welcome')` — visible text
5. `getByTestId('voice-controls')` — last resort, needs `data-testid`
6. `locator('app-voice-controls')` — Angular component selectors (acceptable in this project)

Angular component selectors (`app-*`) are stable in this project and acceptable as locators when semantic selectors are not feasible.

### Assertions — Always Web-First

```typescript
// ✅ Auto-retries until timeout
await expect(page.getByRole('heading')).toBeVisible();
await expect(page.getByRole('alert')).toHaveText('Saved');
await expect(page).toHaveURL(/\/room\//);

// ❌ No auto-retry — races with the DOM
const text = await page.textContent('.msg');
expect(text).toBe('Saved');
```

### Anti-Patterns

| ❌ Don't | ✅ Do | Why |
|----------|-------|-----|
| `page.waitForTimeout(3000)` | `await expect(locator).toBeVisible()` | Hard waits are flaky |
| `expect(await el.isVisible())` | `await expect(el).toBeVisible()` | No auto-retry |
| `page.$('.btn')` | `page.getByRole('button')` | Fragile selector |
| `page.click('.submit')` | `page.getByRole('button', { name: 'Submit' }).click()` | Not accessible |
| Shared state between tests | `test.beforeEach` for setup | Tests must be independent |
| `try/catch` around assertions | Let Playwright handle retries | Swallows real failures |

### Test Structure

```typescript
import { test, expect } from '../fixtures/base';

test.describe('Feature Name', () => {
  test('should do specific thing', async ({ page }) => {
    await test.step('Navigate to page', async () => {
      await page.goto('/login');
    });

    await test.step('Fill form', async () => {
      await page.getByLabel('Email').fill('test@example.com');
    });

    await test.step('Verify result', async () => {
      await expect(page).toHaveURL(/\/search/);
    });
  });
});
```

Use `test.step()` for readability in complex flows.

## Step 4 — Page Object Models

Use a POM for any test file with more than 2 tests. Match the app's route structure:

```typescript
// e2e/pages/login.page.ts
import { type Page, type Locator } from '@playwright/test';

export class LoginPage {
  readonly emailInput: Locator;
  readonly passwordInput: Locator;
  readonly submitButton: Locator;

  constructor(private page: Page) {
    this.emailInput = page.getByLabel('Email');
    this.passwordInput = page.getByLabel('Password');
    this.submitButton = page.getByRole('button', { name: /sign in|log in/i });
  }

  async goto() {
    await this.page.goto('/login');
  }

  async login(email: string, password: string) {
    await this.emailInput.fill(email);
    await this.passwordInput.fill(password);
    await this.submitButton.click();
  }
}
```

**Key pages to model (match `app.routes.ts`):**

| Route | Page Object | Component |
|-------|-------------|-----------|
| `/login` | `LoginPage` | `LoginComponent` |
| `/register` | `RegisterPage` | `RegisterComponent` |
| `/search` | `ServerSearchPage` | `ServerSearchComponent` |
| `/room/:roomId` | `ChatRoomPage` | `ChatRoomComponent` |
| `/settings` | `SettingsPage` | `SettingsComponent` |
| `/invite/:inviteId` | `InvitePage` | `InviteComponent` |

## Step 5 — MetoYou App Architecture Context

The agent writing tests MUST understand these domain boundaries:

### Voice/WebRTC Stack

| Layer | What It Does | Test Relevance |
|-------|-------------|----------------|
| `VoiceConnectionFacade` | High-level voice API (connect/disconnect/mute/deafen) | State signals to assert against |
| `VoiceSessionFacade` | Session lifecycle, workspace layout | UI mode changes |
| `VoiceActivityService` | Speaking detection (RMS threshold 0.015) | `isSpeaking()` signal validation |
| `VoicePlaybackService` | Per-peer GainNode (0–200% volume) | Volume level assertions |
| `PeerConnectionManager` | RTCPeerConnection lifecycle | Connection state introspection |
| `MediaManager` | getUserMedia, mute, gain chain | Track state validation |
| `SignalingManager` | WebSocket per signal URL | Connection establishment |
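
As an illustration of the speaking-detection rule above, RMS over a sample buffer can be computed like this (a sketch mirroring the 0.015 threshold, not the actual `VoiceActivityService` code):

```typescript
// Root-mean-square of an audio sample buffer, as used for speaking detection.
function rms(samples: Float32Array): number {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}

// Illustration only: a frame counts as "speaking" if RMS exceeds the threshold.
function isSpeaking(samples: Float32Array, threshold = 0.015): boolean {
  return rms(samples) > threshold;
}
```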

### Voice UI Components

| Component | Selector | Contains |
|-----------|----------|----------|
| `VoiceWorkspaceComponent` | `app-voice-workspace` | Stream tiles, layout |
| `VoiceControlsComponent` | `app-voice-controls` | Mute, camera, screen share, hang-up buttons |
| `FloatingVoiceControlsComponent` | `app-floating-voice-controls` | Floating variant of the controls |
| `VoiceWorkspaceStreamTileComponent` | `app-voice-workspace-stream-tile` | Per-peer audio/video tile |

### Voice UI Icons (Lucide)

| Icon | Meaning |
|------|---------|
| `lucideMic` / `lucideMicOff` | Mute toggle |
| `lucideVideo` / `lucideVideoOff` | Camera toggle |
| `lucideMonitor` / `lucideMonitorOff` | Screen share toggle |
| `lucidePhoneOff` | Hang up / disconnect |
| `lucideHeadphones` | Deafen state |
| `lucideVolume2` / `lucideVolumeX` | Volume indicator |

### Server & Signaling

- **Signaling server**: Port `3001` (HTTP by default, HTTPS if `SSL=true`)
- **Angular dev server**: Port `4200`
- **WebSocket signaling**: Upgrades on the same port as the server
- **Protocol**: `identify` → `server_users` → SDP offer/answer → ICE candidates
- **PING/PONG**: Every 30s, 45s timeout
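
The PING/PONG liveness rule can be expressed as a small predicate for test helpers (a sketch; `isConnectionStale` is a hypothetical name, not server code):

```typescript
// Hypothetical helper mirroring the liveness rule above:
// a peer is considered dead if no PONG arrived within the 45s window.
function isConnectionStale(
  lastPongMs: number,
  nowMs: number,
  timeoutMs = 45_000,
): boolean {
  return nowMs - lastPongMs > timeoutMs;
}
```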

## Step 6 — Validation Workflow

After generating any test:

```
1. npx playwright install --with-deps chromium   # First time only
2. npx playwright test --project=chromium        # Run tests
3. npx playwright test --ui                      # Interactive debug
4. npx playwright show-report                    # HTML report
```

If the test involves WebRTC, always verify:

- Fake media flags are set in the config
- Timeouts are sufficient (60s+ for connection establishment)
- `workers: 1` if tests share server state
- Browser permissions are granted for microphone/camera

## Quick Reference — Commands

```bash
npx playwright test                              # Run all
npx playwright test --ui                         # Interactive UI
npx playwright test --debug                      # Step-through debugger
npx playwright test tests/voice/                 # Voice tests only
npx playwright test --project=chromium           # Single browser
npx playwright test -g "voice connects"          # By test name
npx playwright show-report                       # HTML report
npx playwright codegen http://localhost:4200     # Record a test
```

## Reference Files

| File | When to Read |
|------|-------------|
| [reference/multi-client-webrtc.md](./reference/multi-client-webrtc.md) | Voice/video/WebRTC tests, multi-browser contexts, audio validation |
| [reference/project-setup.md](./reference/project-setup.md) | First-time scaffold, dependency installation, config creation |

---

.agents/skills/playwright-e2e/reference/multi-client-webrtc.md (new file, 536 lines)

# Multi-Client WebRTC Testing

This reference covers the hardest E2E testing scenario in MetoYou: verifying that voice/video connections actually work between multiple clients.

## Core Concept: Multiple Browser Contexts

Playwright can create multiple **independent** browser contexts within a single test. Each context is an isolated session (separate cookies, storage, and WebRTC state). This is how we simulate multiple users.

```typescript
import { test, expect, chromium } from '@playwright/test';

test('two users can voice chat', async () => {
  const browser = await chromium.launch({
    args: [
      '--use-fake-device-for-media-stream',
      '--use-fake-ui-for-media-stream',
      '--use-file-for-fake-audio-capture=e2e/fixtures/test-tone-440hz.wav',
    ],
  });

  const contextA = await browser.newContext({
    permissions: ['microphone', 'camera'],
  });
  const contextB = await browser.newContext({
    permissions: ['microphone', 'camera'],
  });

  const alice = await contextA.newPage();
  const bob = await contextB.newPage();

  // ... test logic with alice and bob ...

  await contextA.close();
  await contextB.close();
  await browser.close();
});
```

## Custom Fixture: Multi-Client

Create a reusable fixture for multi-client tests:

```typescript
// e2e/fixtures/multi-client.ts
import { test as base, chromium, type Page, type BrowserContext, type Browser } from '@playwright/test';

type Client = {
  page: Page;
  context: BrowserContext;
};

type MultiClientFixture = {
  createClient: () => Promise<Client>;
  browser: Browser;
};

export const test = base.extend<MultiClientFixture>({
  browser: async ({}, use) => {
    const browser = await chromium.launch({
      args: [
        '--use-fake-device-for-media-stream',
        '--use-fake-ui-for-media-stream',
      ],
    });
    await use(browser);
    await browser.close();
  },

  createClient: async ({ browser }, use) => {
    const clients: Client[] = [];

    const factory = async (): Promise<Client> => {
      const context = await browser.newContext({
        permissions: ['microphone', 'camera'],
        baseURL: 'http://localhost:4200',
      });
      const page = await context.newPage();
      clients.push({ page, context });
      return { page, context };
    };

    await use(factory);

    // Cleanup
    for (const client of clients) {
      await client.context.close();
    }
  },
});

export { expect } from '@playwright/test';
```

**Usage:**

```typescript
import { test, expect } from '../fixtures/multi-client';

test('voice call connects between two users', async ({ createClient }) => {
  const alice = await createClient();
  const bob = await createClient();

  // Login both users
  await alice.page.goto('/login');
  await bob.page.goto('/login');
  // ... login flows ...
});
```

## Fake Media Devices

Chromium's fake device flags are essential for headless WebRTC testing:

| Flag | Purpose |
|------|---------|
| `--use-fake-device-for-media-stream` | Provides fake mic/camera devices (no real hardware needed) |
| `--use-fake-ui-for-media-stream` | Auto-grants device permission prompts |
| `--use-file-for-fake-audio-capture=<path>` | Feeds a WAV/WebM file as fake mic input |
| `--use-file-for-fake-video-capture=<path>` | Feeds a Y4M/MJPEG file as fake video input |

### Test Audio File

Create a 440Hz sine-wave test tone for audio verification:

```bash
# Generate a 5-second 440Hz mono WAV test tone
ffmpeg -f lavfi -i "sine=frequency=440:duration=5" -ar 48000 -ac 1 e2e/fixtures/test-tone-440hz.wav
```

Or rely on Chromium's built-in fake device, which generates a test tone automatically when `--use-fake-device-for-media-stream` is set without a capture file.
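
When tests assert on audio energy, it can help to check that the tone fixture actually exists first (a sketch; the helper name is hypothetical, and the path assumes the ffmpeg command above):

```typescript
import { existsSync } from 'node:fs';

// Path assumed from the ffmpeg command above.
const TONE_FIXTURE = 'e2e/fixtures/test-tone-440hz.wav';

// Hypothetical guard: lets tests fall back to Chromium's built-in tone
// (and relax strict energy assertions) when the fixture file is missing.
function hasToneFixture(): boolean {
  return existsSync(TONE_FIXTURE);
}
```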

## WebRTC Connection Introspection

### Checking RTCPeerConnection State

Inject JavaScript to inspect WebRTC internals via `page.evaluate()`:

```typescript
// e2e/helpers/webrtc-helpers.ts
import { type Page } from '@playwright/test';

/**
 * Get all RTCPeerConnection instances and their states.
 * Possible approaches: (1) a chrome://webrtc-internals equivalent,
 * (2) monkey-patch RTCPeerConnection before the app loads,
 * (3) expose connections from the app itself.
 * This helper uses approach 2: it reads the connections tracked by
 * installWebRTCTracking(), so install that before any goto().
 */
export async function getPeerConnectionStates(page: Page): Promise<Array<{
  connectionState: string;
  iceConnectionState: string;
  signalingState: string;
}>> {
  return page.evaluate(() => {
    const connections = (window as any).__rtcConnections as RTCPeerConnection[] | undefined;
    return (connections ?? []).map((pc) => ({
      connectionState: pc.connectionState,
      iceConnectionState: pc.iceConnectionState,
      signalingState: pc.signalingState,
    }));
  });
}

/**
 * Wait for at least one peer connection to reach the 'connected' state.
 */
export async function waitForPeerConnected(page: Page, timeout = 30_000): Promise<void> {
  await page.waitForFunction(() => {
    // Check if any tracked RTCPeerConnection reached 'connected'
    return (window as any).__rtcConnections?.some(
      (pc: RTCPeerConnection) => pc.connectionState === 'connected'
    ) ?? false;
  }, undefined, { timeout }); // waitForFunction is (fn, arg, options), so options go third
}

/**
 * Get WebRTC stats for audio tracks (inbound/outbound).
 */
export async function getAudioStats(page: Page): Promise<{
  outbound: { bytesSent: number; packetsSent: number } | null;
  inbound: { bytesReceived: number; packetsReceived: number } | null;
}> {
  return page.evaluate(async () => {
    const connections = (window as any).__rtcConnections as RTCPeerConnection[] | undefined;
    if (!connections?.length) return { outbound: null, inbound: null };

    const pc = connections[0];
    const stats = await pc.getStats();

    let outbound: any = null;
    let inbound: any = null;

    stats.forEach((report) => {
      if (report.type === 'outbound-rtp' && report.kind === 'audio') {
        outbound = { bytesSent: report.bytesSent, packetsSent: report.packetsSent };
      }
      if (report.type === 'inbound-rtp' && report.kind === 'audio') {
        inbound = { bytesReceived: report.bytesReceived, packetsReceived: report.packetsReceived };
      }
    });

    return { outbound, inbound };
  });
}
```

### RTCPeerConnection Monkey-Patch

To track all peer connections created by the app, inject a monkey-patch **before** navigation:

```typescript
/**
 * Install RTCPeerConnection tracking on a page BEFORE navigating.
 * Call this immediately after page creation, before any goto().
 */
export async function installWebRTCTracking(page: Page): Promise<void> {
  await page.addInitScript(() => {
    const connections: RTCPeerConnection[] = [];
    (window as any).__rtcConnections = connections;

    const OriginalRTCPeerConnection = window.RTCPeerConnection;
    (window as any).RTCPeerConnection = function (...args: any[]) {
      const pc = new OriginalRTCPeerConnection(...args);
      connections.push(pc);

      pc.addEventListener('connectionstatechange', () => {
        (window as any).__lastRtcState = pc.connectionState;
      });

      // Track remote streams
      pc.addEventListener('track', (event) => {
        if (!(window as any).__rtcRemoteTracks) {
          (window as any).__rtcRemoteTracks = [];
        }
        (window as any).__rtcRemoteTracks.push({
          kind: event.track.kind,
          id: event.track.id,
          readyState: event.track.readyState,
        });
      });

      return pc;
    } as any;

    // Preserve the prototype chain
    (window as any).RTCPeerConnection.prototype = OriginalRTCPeerConnection.prototype;
  });
}
```

## Voice Call Test Pattern — Full Example

This is the canonical pattern for testing that voice actually connects between two clients:

```typescript
import { test, expect } from '../fixtures/multi-client';
import { installWebRTCTracking, getAudioStats } from '../helpers/webrtc-helpers';

test.describe('Voice Call', () => {
  test('two users connect voice and exchange audio', async ({ createClient }) => {
    const alice = await createClient();
    const bob = await createClient();

    // Install WebRTC tracking BEFORE navigation
    await installWebRTCTracking(alice.page);
    await installWebRTCTracking(bob.page);

    await test.step('Both users log in', async () => {
      // Login Alice
      await alice.page.goto('/login');
      await alice.page.getByLabel('Email').fill('alice@test.com');
      await alice.page.getByLabel('Password').fill('password123');
      await alice.page.getByRole('button', { name: /sign in/i }).click();
      await expect(alice.page).toHaveURL(/\/search/);

      // Login Bob
      await bob.page.goto('/login');
      await bob.page.getByLabel('Email').fill('bob@test.com');
      await bob.page.getByLabel('Password').fill('password123');
      await bob.page.getByRole('button', { name: /sign in/i }).click();
      await expect(bob.page).toHaveURL(/\/search/);
    });

    await test.step('Both users join the same room', async () => {
      // Navigate to a shared room (adapt the URL to an actual room ID)
      const roomUrl = '/room/test-room-id';
      await alice.page.goto(roomUrl);
      await bob.page.goto(roomUrl);

      // Verify both are in the room
      await expect(alice.page.locator('app-chat-room')).toBeVisible();
      await expect(bob.page.locator('app-chat-room')).toBeVisible();
    });

    await test.step('Alice starts voice', async () => {
      // Click the voice/call join button (adapt the selector to the actual UI)
      await alice.page.getByRole('button', { name: /join voice|connect/i }).click();

      // Voice workspace should appear
      await expect(alice.page.locator('app-voice-workspace')).toBeVisible();

      // Voice controls should be visible
      await expect(alice.page.locator('app-voice-controls')).toBeVisible();
    });

    await test.step('Bob joins voice', async () => {
      await bob.page.getByRole('button', { name: /join voice|connect/i }).click();
      await expect(bob.page.locator('app-voice-workspace')).toBeVisible();
    });

    await test.step('WebRTC connection establishes', async () => {
      // Wait for a peer connection to reach 'connected' on both sides.
      // waitForFunction is (fn, arg, options), so options go third.
      await alice.page.waitForFunction(
        () => (window as any).__rtcConnections?.some(
          (pc: any) => pc.connectionState === 'connected'
        ),
        undefined,
        { timeout: 30_000 }
      );
      await bob.page.waitForFunction(
        () => (window as any).__rtcConnections?.some(
          (pc: any) => pc.connectionState === 'connected'
        ),
        undefined,
        { timeout: 30_000 }
      );
    });

    await test.step('Audio is flowing in both directions', async () => {
      // Wait a moment for audio stats to accumulate
      await alice.page.waitForTimeout(3_000); // Acceptable here: waiting for stats accumulation

      // Check Alice is sending audio
      const aliceStats = await getAudioStats(alice.page);
      expect(aliceStats.outbound).not.toBeNull();
      expect(aliceStats.outbound!.bytesSent).toBeGreaterThan(0);
      expect(aliceStats.outbound!.packetsSent).toBeGreaterThan(0);

      // Check Bob is sending audio
      const bobStats = await getAudioStats(bob.page);
      expect(bobStats.outbound).not.toBeNull();
      expect(bobStats.outbound!.bytesSent).toBeGreaterThan(0);

      // Check Alice receives Bob's audio
      expect(aliceStats.inbound).not.toBeNull();
      expect(aliceStats.inbound!.bytesReceived).toBeGreaterThan(0);

      // Check Bob receives Alice's audio
      expect(bobStats.inbound).not.toBeNull();
      expect(bobStats.inbound!.bytesReceived).toBeGreaterThan(0);
    });

    await test.step('Voice UI states are correct', async () => {
      // Both should see stream tiles for each other
      await expect(alice.page.locator('app-voice-workspace-stream-tile')).toHaveCount(2);
      await expect(bob.page.locator('app-voice-workspace-stream-tile')).toHaveCount(2);

      // Mute button should be visible and in the unmuted state
      // (lucideMic icon visible, lucideMicOff NOT visible)
      await expect(alice.page.locator('app-voice-controls')).toBeVisible();
    });

    await test.step('Mute toggles correctly', async () => {
      // Alice mutes
      await alice.page.getByRole('button', { name: /mute/i }).click();

      // Alice's local UI shows the muted state;
      // Bob should see Alice as muted (mute indicator on her tile)

      // Verify no audio is sent from Alice after mute
      const preStats = await getAudioStats(alice.page);
      await alice.page.waitForTimeout(2_000);
      const postStats = await getAudioStats(alice.page);

      // bytesSent should not increase (or increase only minimally — comfort noise).
      // The exact assertion depends on whether mute stops the track or sends silence.
    });

    await test.step('Alice hangs up', async () => {
      await alice.page.getByRole('button', { name: /hang up|disconnect|leave/i }).click();

      // Voice workspace should disappear for Alice
      await expect(alice.page.locator('app-voice-workspace')).not.toBeVisible();

      // Bob should see Alice's tile disappear
      await expect(bob.page.locator('app-voice-workspace-stream-tile')).toHaveCount(1);
    });
  });
});
```
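
The mute check in the example above can be factored into a pure predicate (a hypothetical helper; the byte tolerance is an assumption, since comfort noise and RTCP may still add a few bytes after mute):

```typescript
type OutboundAudio = { bytesSent: number; packetsSent: number };

// Hypothetical predicate: treat outbound audio as stalled when bytesSent
// grew by no more than a small tolerance between two stat snapshots.
function outboundAudioStalled(
  pre: OutboundAudio,
  post: OutboundAudio,
  toleranceBytes = 2_000,
): boolean {
  return post.bytesSent - pre.bytesSent <= toleranceBytes;
}
```

With this, the mute step can assert `expect(outboundAudioStalled(preStats.outbound!, postStats.outbound!)).toBe(true)` once you know whether mute stops the track or sends silence.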

## Verifying Audio Is Actually Received

Beyond checking `bytesReceived > 0`, you can verify actual audio energy:

```typescript
/**
 * Check if audio energy is present on a received stream.
 * Uses a Web Audio AnalyserNode — the same approach as MetoYou's VoiceActivityService.
 */
export async function hasAudioEnergy(page: Page): Promise<boolean> {
  return page.evaluate(async () => {
    const connections = (window as any).__rtcConnections as RTCPeerConnection[];
    if (!connections?.length) return false;

    for (const pc of connections) {
      const receivers = pc.getReceivers();
      for (const receiver of receivers) {
        if (receiver.track.kind !== 'audio' || receiver.track.readyState !== 'live') continue;

        const audioCtx = new AudioContext();
        const source = audioCtx.createMediaStreamSource(new MediaStream([receiver.track]));
        const analyser = audioCtx.createAnalyser();
        analyser.fftSize = 256;
        source.connect(analyser);

        // Sample over 500ms
        const dataArray = new Float32Array(analyser.frequencyBinCount);
        await new Promise(resolve => setTimeout(resolve, 500));
        analyser.getFloatTimeDomainData(dataArray);

        // Calculate RMS (same as VoiceActivityService)
        let sum = 0;
        for (const sample of dataArray) {
          sum += sample * sample;
        }
        const rms = Math.sqrt(sum / dataArray.length);

        await audioCtx.close();

        // MetoYou uses threshold 0.015 — use a lower threshold for tests
        if (rms > 0.005) return true;
      }
    }
    return false;
  });
}
```

## Testing Speaker/Playback Volume

MetoYou uses per-peer GainNode chains (0–200%). To verify:

```typescript
await test.step('Bob adjusts Alice volume to 50%', async () => {
  // Interact with the volume slider in the stream tile
  // (adapt the selector to the actual volume control UI)
  const volumeSlider = bob.page.locator('app-voice-workspace-stream-tile')
    .filter({ hasText: 'Alice' })
    .getByRole('slider');

  await volumeSlider.fill('50');

  // Verify the gain was set (check via WebRTC or app state)
  const gain = await bob.page.evaluate(() => {
    // Access VoicePlaybackService through Angular DI if exposed,
    // or check the audio element volume
    const audioElements = document.querySelectorAll('audio');
    return audioElements[0]?.volume;
  });

  // Assert the readback; assumes the element volume mirrors the 50% setting
  expect(gain).toBeCloseTo(0.5, 1);
});
```
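
The 0–200% slider value and the gain value live on different scales. Assuming a linear mapping (an assumption for illustration, not a confirmed MetoYou detail), the conversion is worth isolating so tests and assertions agree on it:

```typescript
// Convert a slider percentage (0-200) to a gain value (0.0-2.0).
// Linear mapping and clamping range are assumptions, not app constants.
function percentToGain(percent: number): number {
  const clamped = Math.min(200, Math.max(0, percent));
  return clamped / 100;
}
```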

## Testing Screen Share

Screen share requires `getDisplayMedia()`, which cannot be auto-granted. Options:

1. **Mock at the browser level** — use `page.addInitScript()` to replace `getDisplayMedia` with a fake stream
2. **Use Chromium flags** — `--auto-select-desktop-capture-source=Entire screen`

```typescript
// Mock getDisplayMedia before navigation
await page.addInitScript(() => {
  navigator.mediaDevices.getDisplayMedia = async () => {
    // Create a simple canvas stream as a fake screen share
    const canvas = document.createElement('canvas');
    canvas.width = 1280;
    canvas.height = 720;
    const ctx = canvas.getContext('2d')!;
    ctx.fillStyle = '#4a90d9';
    ctx.fillRect(0, 0, 1280, 720);
    ctx.fillStyle = 'white';
    ctx.font = '48px sans-serif';
    ctx.fillText('Fake Screen Share', 400, 380);
    return canvas.captureStream(30);
  };
});
```

## Debugging Tips

### Trace Viewer

When a WebRTC test fails, the trace captures network requests, console logs, and screenshots:

```bash
npx playwright show-trace test-results/trace.zip
```

### Console Log Forwarding

Forward browser console output to Node for real-time debugging:

```typescript
// `name` identifies the client (e.g. 'alice', 'bob') in multi-client tests
page.on('console', msg => console.log(`[${name}]`, msg.text()));
page.on('pageerror', err => console.error(`[${name}] PAGE ERROR:`, err.message));
```

### WebRTC Internals

Chromium exposes WebRTC stats at `chrome://webrtc-internals/`. In Playwright, access the same data via:

```typescript
const stats = await page.evaluate(async () => {
  // `__rtcConnections` is populated by the app/init script; default to empty
  const pcs = (window as any).__rtcConnections ?? [];
  return Promise.all(pcs.map(async (pc: any) => {
    const stats = await pc.getStats();
    const result: any[] = [];
    stats.forEach((report: any) => result.push(report));
    return result;
  }));
});
```
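
Raw `getStats()` dumps are noisy. A small pure helper can reduce them to the number voice tests usually assert on; the field names used here (`type`, `kind`, `bytesReceived`) are standard WebRTC stats identifiers, but treat the exact report shape as an assumption until checked against real output:

```typescript
interface StatsReport {
  type: string;
  kind?: string;
  bytesReceived?: number;
}

// Sum bytesReceived across all inbound audio RTP streams in one stats dump.
function inboundAudioBytes(reports: StatsReport[]): number {
  return reports
    .filter(r => r.type === 'inbound-rtp' && r.kind === 'audio')
    .reduce((total, r) => total + (r.bytesReceived ?? 0), 0);
}
```

A voice test can then sample this value twice, a few seconds apart, and assert that it grows while the remote peer is unmuted.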

## Timeout Guidelines

| Operation | Recommended Timeout |
|-----------|---------------------|
| Page navigation | 10s (default) |
| Login flow | 15s |
| WebRTC connection establishment | 30s |
| ICE negotiation (TURN fallback) | 45s |
| Audio stats accumulation | 3–5s after connection |
| Full voice test (end-to-end) | 90s |
| Screen share setup | 15s |
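
Playwright's built-in timeouts cover locator and navigation waits, but raw promises (e.g. a custom signaling wait) need their own guard to honor these budgets. A generic sketch, not part of MetoYou:

```typescript
// Reject if the wrapped promise does not settle within `ms` milliseconds.
async function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer!);
  }
}
```

Usage, with a hypothetical helper name: `await withTimeout(waitForIceConnected(page), 45_000, 'ICE negotiation');`.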

## Parallelism Warning

WebRTC multi-client tests **must** run with `workers: 1` (sequential) because:

- All clients share the same signaling server instance
- Server state (rooms, users) is mutable
- ICE candidates reference `localhost`, so port conflicts are possible with parallel launches

If you need parallelism, use separate server instances with different ports per worker.
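
Because Playwright's `webServer` config runs once globally, per-worker servers need a worker-scoped fixture instead. A sketch assuming the signaling server reads its port from a `PORT` environment variable (an assumption about your setup):

```typescript
import { test as base } from '@playwright/test';
import { spawn, ChildProcess } from 'child_process';

// Worker-scoped fixture: each Playwright worker starts its own signaling
// server on a unique port derived from the worker index.
export const test = base.extend<{}, { serverPort: number }>({
  serverPort: [async ({}, use, workerInfo) => {
    const port = 3001 + workerInfo.workerIndex;
    const proc: ChildProcess = spawn('npm', ['run', 'dev'], {
      cwd: '../server',
      env: { ...process.env, PORT: String(port) },
    });
    // In real code, poll the port for readiness before continuing.
    await use(port);
    proc.kill();
  }, { scope: 'worker' }],
});
```

The Angular dev server would need the same treatment if tests mutate front-end-served state.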

151 .agents/skills/playwright-e2e/reference/project-setup.md Normal file
@@ -0,0 +1,151 @@

# Project Setup — Playwright E2E for MetoYou

Before creating or changing any tests, read AGENTS.md to understand how the application works.

## First-Time Scaffold

### 1. Install Dependencies

From the **repository root**:

```bash
npm install -D @playwright/test
npx playwright install --with-deps chromium
```

Only Chromium is needed, since the WebRTC fake-media flags are Chromium-only. Add Firefox/WebKit later if needed for non-WebRTC tests.

### 2. Create Directory Structure

```bash
mkdir -p e2e/{tests/{auth,chat,voice,settings},fixtures,pages,helpers}
```

### 3. Create Config

Create `e2e/playwright.config.ts`:

```typescript
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  timeout: 60_000,
  expect: { timeout: 10_000 },
  retries: process.env.CI ? 2 : 0,
  workers: 1,
  reporter: [['html', { outputFolder: '../test-results/html-report' }], ['list']],
  outputDir: '../test-results/artifacts',
  use: {
    baseURL: 'http://localhost:4200',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
    video: 'on-first-retry',
    permissions: ['microphone', 'camera'],
  },
  projects: [
    {
      name: 'chromium',
      use: {
        ...devices['Desktop Chrome'],
        launchOptions: {
          args: [
            '--use-fake-device-for-media-stream',
            '--use-fake-ui-for-media-stream',
          ],
        },
      },
    },
  ],
  webServer: [
    {
      command: 'cd ../server && npm run dev',
      port: 3001,
      reuseExistingServer: !process.env.CI,
      timeout: 30_000,
    },
    {
      command: 'cd ../toju-app && npx ng serve',
      port: 4200,
      reuseExistingServer: !process.env.CI,
      timeout: 60_000,
    },
  ],
});
```

### 4. Create Base Fixture

Create `e2e/fixtures/base.ts`:

```typescript
import { test as base } from '@playwright/test';

export const test = base.extend({
  // Add common fixtures here as the test suite grows
  // Examples: authenticated page, test data seeding, etc.
});

export { expect } from '@playwright/test';
```

### 5. Create Multi-Client Fixture

Create `e2e/fixtures/multi-client.ts` — see [multi-client-webrtc.md](./multi-client-webrtc.md) for the full fixture code.

### 6. Create WebRTC Helpers

Create `e2e/helpers/webrtc-helpers.ts` — see [multi-client-webrtc.md](./multi-client-webrtc.md) for the helper functions.

### 7. Add npm Scripts

Add to the root `package.json`:

```json
{
  "scripts": {
    "test:e2e": "cd e2e && npx playwright test",
    "test:e2e:ui": "cd e2e && npx playwright test --ui",
    "test:e2e:debug": "cd e2e && npx playwright test --debug",
    "test:e2e:report": "cd e2e && npx playwright show-report ../test-results/html-report"
  }
}
```

### 8. Update .gitignore

Add to `.gitignore`:

```
# Playwright
test-results/
e2e/playwright-report/
```

### 9. Generate Test Audio Fixture (Optional)

For voice tests with controlled audio input:

```bash
# Requires ffmpeg
ffmpeg -f lavfi -i "sine=frequency=440:duration=5" -ar 48000 -ac 1 e2e/fixtures/test-tone-440hz.wav
```

## Existing Dev Stack Integration

The tests assume the standard MetoYou dev stack:

- **Signaling server** at `http://localhost:3001` (via `server/npm run dev`)
- **Angular dev server** at `http://localhost:4200` (via `toju-app/npx ng serve`)

The `webServer` config in `playwright.config.ts` starts these automatically if they are not already running. When running `npm run dev` (the full Electron stack) separately, the tests reuse the existing servers.

## Test Data Requirements

E2E tests need user accounts to log in with. Options:

1. **Seed via API** — create users in `test.beforeAll` via the server REST API
2. **Pre-seeded database** — maintain a test SQLite database with known accounts
3. **Register in test** — use the `/register` flow as a setup step (slower but self-contained)

Recommended: Option 3 for the initial setup; migrate to Option 1 as the test suite grows.