DRA-272-Build-Admin-Dashboard#358
Conversation
…tics

Backend:
- Add AdminStatsProcessor singleton with overview, time-series, sync health, system health, and data source type breakdown queries
- Add /admin/stats REST routes: overview, timeseries, sync-health, system-health, datasource-types — all guarded by admin JWT check
- Mount /admin/stats in index.ts alongside existing admin routes
- Extend SocketIODriver with join-admin-room event handler (admin-only verified against DB) and emitToRoom() public helper
- Emit admin-stats-update over Socket.IO on user creation (UserManagementProcessor) and backup completion/failure (ScheduledBackupProcessor)

Frontend:
- Add frontend/types/admin/stats.ts — shared TypeScript interfaces (IAdminOverviewStats, IDataSourceSyncRow, ISystemHealthStatus, ITimeSeriesPoint)
- Add useAdminStats composable: fetches overview/sync-health/system-health, subscribes to admin-stats-update Socket.IO event for live refresh; uses getAuthToken() and import.meta.client guards throughout
- Add useAdminTimeSeries composable: fetches time-series and data source type breakdown endpoints on demand
- Add AdminStatCard.vue — reusable stat card (icon, value, delta chip, optional quick-action link)
- Add AdminChart.vue — SSR-safe D3 wrapper supporting line, bar, and donut chart types; cleans up SVG on unmount
- Implement admin/index.vue with 8 sections: global health banner, user growth & activity, platform usage, data source sync health, AI usage metrics, system health & infrastructure, account cancellations & churn, and content management
Pull request overview
This PR implements a comprehensive admin dashboard feature with real-time platform statistics. It adds a new AdminStatsProcessor singleton on the backend that aggregates data from multiple PostgreSQL tables and Redis, exposes it through 5 REST endpoints under /admin/stats, and integrates with Socket.IO for live refresh when users are created or backups complete. On the frontend, it adds a composable (useAdminStats), two reusable components (AdminStatCard, AdminChart with D3), and a full dashboard page with 8 sections covering user growth, platform usage, sync health, AI metrics, system health, cancellations, and content management.
Changes:
- Backend: New AdminStatsProcessor with overview/timeseries/sync-health/system-health/datasource-types queries, admin-guarded routes, and Socket.IO room-based live updates via the new emitToRoom() method
- Frontend: New useAdminStats composable with Socket.IO subscription, AdminStatCard and AdminChart (D3 line/bar/donut) components, and a redesigned admin/index.vue dashboard page
- Integration: Socket.IO events emitted on user creation and backup completion to push real-time updates to the admin dashboard
Reviewed changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 9 comments.
Summary per file:

| File | Description |
|---|---|
| frontend/types/admin/stats.ts | Shared TypeScript interfaces for admin dashboard data |
| frontend/pages/admin/index.vue | Complete admin dashboard page with 8 stat sections |
| frontend/composables/useAdminStats.ts | Composable for fetching overview/sync/health stats + Socket.IO live refresh |
| frontend/components/AdminStatCard.vue | Reusable stat card with icon, value, delta chip, and action link |
| frontend/components/AdminChart.vue | SSR-safe D3 wrapper for line, bar, and donut charts |
| backend/src/processors/AdminStatsProcessor.ts | Singleton processor with all admin stat query methods |
| backend/src/routes/admin/stats.ts | REST routes for admin stats, guarded by JWT + admin check |
| backend/src/index.ts | Mount /admin/stats route |
| backend/src/drivers/SocketIODriver.ts | Add join-admin-room event handler and emitToRoom() helper |
| backend/src/processors/UserManagementProcessor.ts | Emit admin-stats-update on user creation |
| backend/src/processors/ScheduledBackupProcessor.ts | Emit admin-stats-update on backup completion/failure |
const [signups, projects, ai, cancellations, dsTypes] = await Promise.all([
  $fetch<any>(`${config.public.apiBase}/admin/stats/timeseries?metric=signups&days=30`, { headers: authHeaders() }),
  $fetch<any>(`${config.public.apiBase}/admin/stats/timeseries?metric=projects&days=30`, { headers: authHeaders() }),
  $fetch<any>(`${config.public.apiBase}/admin/stats/timeseries?metric=ai_messages&days=30`, { headers: authHeaders() }),
  $fetch<any>(`${config.public.apiBase}/admin/stats/timeseries?metric=cancellations&days=30`, { headers: authHeaders() }),
  $fetch<any>(`${config.public.apiBase}/admin/stats/datasource-types`, { headers: authHeaders() }),
]);
if (signups.success) signupSeries.value = signups.data;
if (projects.success) projectSeries.value = projects.data;
if (ai.success) aiSeries.value = ai.data;
if (cancellations.success) cancellationSeries.value = cancellations.data;
if (dsTypes.success) {
  dsTypeSeries.value = dsTypes.data.map((d: any) => ({ label: d.data_type, value: d.count }));
}
Promise.all at line 37 will reject if any single fetch fails, causing all five time-series requests to lose their results — even the ones that succeeded. Consider using Promise.allSettled instead, so successful responses are still displayed even if one endpoint fails. For example:
const results = await Promise.allSettled([...]);
results.forEach((r, i) => { if (r.status === 'fulfilled' && r.value.success) { /* assign */ } });
Suggested change:

const results = await Promise.allSettled([
  $fetch<any>(`${config.public.apiBase}/admin/stats/timeseries?metric=signups&days=30`, { headers: authHeaders() }),
  $fetch<any>(`${config.public.apiBase}/admin/stats/timeseries?metric=projects&days=30`, { headers: authHeaders() }),
  $fetch<any>(`${config.public.apiBase}/admin/stats/timeseries?metric=ai_messages&days=30`, { headers: authHeaders() }),
  $fetch<any>(`${config.public.apiBase}/admin/stats/timeseries?metric=cancellations&days=30`, { headers: authHeaders() }),
  $fetch<any>(`${config.public.apiBase}/admin/stats/datasource-types`, { headers: authHeaders() }),
]);
const [signupsResult, projectsResult, aiResult, cancellationsResult, dsTypesResult] = results;
if (signupsResult.status === 'fulfilled' && signupsResult.value.success) {
  signupSeries.value = signupsResult.value.data;
} else if (signupsResult.status === 'rejected') {
  console.error('[AdminDashboard] Failed to load signups time-series:', signupsResult.reason);
}
if (projectsResult.status === 'fulfilled' && projectsResult.value.success) {
  projectSeries.value = projectsResult.value.data;
} else if (projectsResult.status === 'rejected') {
  console.error('[AdminDashboard] Failed to load projects time-series:', projectsResult.reason);
}
if (aiResult.status === 'fulfilled' && aiResult.value.success) {
  aiSeries.value = aiResult.value.data;
} else if (aiResult.status === 'rejected') {
  console.error('[AdminDashboard] Failed to load AI messages time-series:', aiResult.reason);
}
if (cancellationsResult.status === 'fulfilled' && cancellationsResult.value.success) {
  cancellationSeries.value = cancellationsResult.value.data;
} else if (cancellationsResult.status === 'rejected') {
  console.error('[AdminDashboard] Failed to load cancellations time-series:', cancellationsResult.reason);
}
if (dsTypesResult.status === 'fulfilled' && dsTypesResult.value.success) {
  dsTypeSeries.value = dsTypesResult.value.data.map((d: any) => ({
    label: d.data_type,
    value: d.count,
  }));
} else if (dsTypesResult.status === 'rejected') {
  console.error('[AdminDashboard] Failed to load data source type stats:', dsTypesResult.reason);
}
export const useAdminStats = () => {
  const config = useRuntimeConfig();
  const { $socket } = useNuxtApp() as any;
Bug: The socket plugin provides the instance as $socketio (see frontend/plugins/socketio.ts:227: nuxtApp.provide("socketio", socket)), but this code destructures it as $socket. Since $socket will always be undefined, the Socket.IO real-time stats subscription (join-admin-room emit and admin-stats-update listener) will never be set up.
Change const { $socket } = useNuxtApp() as any; to const { $socketio } = useNuxtApp() as any; and update all references from $socket to $socketio on lines 78, 79, 83, 89, and 90.
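A minimal sketch of the corrected wiring, assuming the plugin provides the client as $socketio and that a refreshAll() helper (name illustrative, not in this PR) re-runs the overview/sync-health/system-health fetches; the join-admin-room and admin-stats-update names come from this PR:

```ts
// Sketch only: runs inside a Nuxt composable (useNuxtApp is auto-imported);
// `refreshAll` is a hypothetical helper that re-fetches the dashboard stats.
import { onMounted, onUnmounted } from 'vue';

const { $socketio } = useNuxtApp() as any;

onMounted(() => {
  if (!import.meta.client || !$socketio) return;
  // Ask the backend to add this admin's socket to the admin room.
  $socketio.emit('join-admin-room');
  // Re-fetch dashboard stats whenever the backend pushes an update.
  $socketio.on('admin-stats-update', refreshAll);
});

onUnmounted(() => {
  if (!import.meta.client || !$socketio) return;
  $socketio.off('admin-stats-update', refreshAll);
});
```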
        OR connection_details->'api_connection_details'->'api_config'->>'last_sync' = 'null'
      )
    )::int AS never_synced,
    0::int AS failed
The failedSources count is hardcoded to 0 (0::int AS failed on line 197), meaning the overview summary will always report 0 stale sources. However, the getSyncHealthData() method (lines 231-233) properly computes stale status using a 72-hour threshold. This inconsistency causes the summary bar in the frontend (showing failedSources as the stale count) to always show "0 stale" even when the detailed sync table shows stale entries.
Consider computing stale sources in the SQL query using the same 72-hour threshold, e.g.:
COUNT(*) FILTER (WHERE data_type NOT IN (...) AND connection_details->'api_connection_details'->'api_config'->>'last_sync' IS NOT NULL AND ... < NOW() - INTERVAL '72 hours')::int AS failed
Suggested change:

COUNT(*) FILTER (
  WHERE data_type NOT IN ('postgresql','mysql','mariadb','mongodb','csv','excel','pdf')
    AND connection_details->'api_connection_details'->'api_config'->>'last_sync' IS NOT NULL
    AND connection_details->'api_connection_details'->'api_config'->>'last_sync' <> 'null'
    AND (connection_details->'api_connection_details'->'api_config'->>'last_sync')::timestamptz < NOW() - INTERVAL '72 hours'
)::int AS failed
export class AdminStatsProcessor {
  private static instance: AdminStatsProcessor;

  private constructor() {}

  public static getInstance(): AdminStatsProcessor {
    if (!AdminStatsProcessor.instance) {
      AdminStatsProcessor.instance = new AdminStatsProcessor();
    }
    return AdminStatsProcessor.instance;
  }
The AdminStatsProcessor introduces multiple query methods and status computation logic (e.g., stale detection via 72-hour threshold, sync health classification), yet no tests are included. Other processors in this codebase have corresponding test files (e.g., AuthProcessor.test.ts, DataModelProcessor.unit.test.ts, UserManagementProcessor.test.ts). Consider adding unit tests to cover at least the getSyncHealthData status classification logic and the getTimeSeriesData metric validation/sanitization.
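For instance, a small test along these lines could pin down the 72-hour threshold, assuming the classification is extracted into a pure helper on the processor; the classifySyncStatus name, its signature, and the import path are hypothetical:

```ts
// Hypothetical test sketch (Vitest style). Assumes a pure helper
// classifySyncStatus(lastSync: string | null): 'ok' | 'stale' | 'never_synced'
// is factored out of getSyncHealthData() so it can be tested without a database.
import { describe, it, expect } from 'vitest';
import { AdminStatsProcessor } from '../src/processors/AdminStatsProcessor'; // path assumed

describe('AdminStatsProcessor sync status classification', () => {
  const processor = AdminStatsProcessor.getInstance();

  it('marks sources with no last_sync as never synced', () => {
    expect(processor.classifySyncStatus(null)).toBe('never_synced');
  });

  it('marks sources synced within 72 hours as ok', () => {
    const oneHourAgo = new Date(Date.now() - 60 * 60 * 1000).toISOString();
    expect(processor.classifySyncStatus(oneHourAgo)).toBe('ok');
  });

  it('marks sources last synced more than 72 hours ago as stale', () => {
    const fourDaysAgo = new Date(Date.now() - 4 * 24 * 60 * 60 * 1000).toISOString();
    expect(processor.classifySyncStatus(fourDaysAgo)).toBe('stale');
  });
});
```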
async getSyncHealthData(): Promise<IDataSourceSyncRow[]> {
  const manager = AppDataSource.manager;
  const rows = await manager.query(`
    SELECT
      ds.id,
      ds.name,
      ds.data_type,
      ds.created_at,
      u.email AS owner_email,
      ds.connection_details->'api_connection_details'->'api_config'->>'last_sync' AS last_sync
    FROM dra_data_sources ds
    LEFT JOIN dra_users_platform u ON ds.users_platform_id = u.id
    ORDER BY ds.id DESC
    LIMIT 200
  `);
Critical: When encryption is enabled (the default in production), the connection_details column stores encrypted JSONB with the structure {"version":1,"iv":"...","authTag":"...","encrypted":"..."} — not the original nested structure. This means the JSON path connection_details->'api_connection_details'->'api_config'->>'last_sync' will always return NULL for encrypted rows, causing all API-integrated sources to show as "never synced" in both the summary and the detail table.
To fix this, you need to fetch data sources through TypeORM (which applies the decryption transformer automatically) rather than using raw SQL with JSON operators on the encrypted column. For example, use manager.find(DRADataSource, { relations: ['users_platform'] }) and extract last_sync from the decrypted connection_details in application code. The same issue affects getSyncHealthData() (line 217) and querySyncHealthSummary() (lines 193-194).
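A rough sketch of that direction for getSyncHealthData(), assuming a DRADataSource entity whose connection_details column transformer decrypts on read; the import paths, the relation name, and the exact decrypted shape are assumptions inferred from the JSON path used in the raw SQL above:

```ts
// Sketch: load entities so the column transformer decrypts connection_details,
// then read last_sync in application code instead of via SQL JSON operators.
import { AppDataSource } from '../db/data-source';            // path assumed
import { DRADataSource } from '../models/DRADataSource';      // entity name from the reviewer's suggestion; path assumed

const sources = await AppDataSource.manager.find(DRADataSource, {
  relations: ['users_platform'],
  order: { id: 'DESC' },
  take: 200,
});

const rows: IDataSourceSyncRow[] = sources.map((ds) => {
  // Mirrors the JSON path from the raw SQL; adjust to the real decrypted shape.
  const lastSync: string | null =
    ds.connection_details?.api_connection_details?.api_config?.last_sync ?? null;
  return {
    id: ds.id,
    name: ds.name,
    data_type: ds.data_type,
    created_at: ds.created_at,
    owner_email: ds.users_platform?.email ?? null,
    last_sync: lastSync,
  };
});
```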
private async querySyncHealthSummary(manager: any) {
  const rows = await manager.query(`
    SELECT
      COUNT(*)::int AS total,
      COUNT(*) FILTER (
        WHERE data_type NOT IN ('postgresql','mysql','mariadb','mongodb','csv','excel','pdf')
          AND (
            connection_details->'api_connection_details'->'api_config'->>'last_sync' IS NULL
            OR connection_details->'api_connection_details'->'api_config'->>'last_sync' = 'null'
          )
      )::int AS never_synced,
      0::int AS failed
    FROM dra_data_sources
  `);
Same encryption issue as in getSyncHealthData(): the JSON path connection_details->'api_connection_details'->'api_config'->>'last_sync' will always return NULL when the column contains encrypted data (which is the default in production). The never_synced counter will therefore be inflated, counting all API-integrated sources as "never synced" regardless of actual sync status. Use TypeORM entities with the decryption transformer instead of raw SQL JSON operators on this column.
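Continuing the sketch above, the summary counters could then be derived from the decrypted rows in application code rather than in SQL; the excluded-type list and the 72-hour constant mirror the values used elsewhere in this PR:

```ts
// Sketch: compute never_synced and failed (stale) from the decrypted rows.
const NON_API_TYPES = ['postgresql', 'mysql', 'mariadb', 'mongodb', 'csv', 'excel', 'pdf'];
const STALE_THRESHOLD_MS = 72 * 60 * 60 * 1000;

const apiSources = rows.filter((r) => !NON_API_TYPES.includes(r.data_type));
const neverSynced = apiSources.filter((r) => !r.last_sync).length;
const failed = apiSources.filter(
  (r) => r.last_sync && Date.now() - new Date(r.last_sync).getTime() > STALE_THRESHOLD_MS
).length;
```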
  .attr('fill', '#111827')
  .text(String(total));

return;
The donut chart branch returns early at line 71 without setting the cleanup callback. For the line and bar chart types, cleanup is set at lines 142–144 to remove the SVG. This means that when the data changes for a donut chart, the old SVG is only removed via container.value.innerHTML = '' (line 20), but cleanup?.() on line 18 does nothing because it's still null from the previous donut render. While innerHTML = '' covers re-renders, the inconsistency could cause issues if cleanup logic is extended later. Move the cleanup = () => { svg.remove(); } assignment right after the SVG is created (after line 34), so all chart types benefit from it consistently.
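Roughly like this, assuming the component keeps the cleanup callback in a closure as described above; container, width, and height stand in for the component's existing refs and computed dimensions:

```ts
// Sketch: register cleanup immediately after the SVG is created so every chart
// type (line, bar, donut) tears down the same way, even when a branch returns early.
const svg = d3
  .select(container.value)
  .append('svg')
  .attr('width', width)
  .attr('height', height);

cleanup = () => {
  svg.remove();
};

// ...the line / bar / donut branches follow; the donut branch may still return early.
```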
const keys = await redis.keys('dra:ai:*session*');
activeRedisSessions = keys.length;
redis.keys() is an O(N) operation that blocks Redis while scanning all keys. While this is an admin-only endpoint and the codebase uses redis.keys() in other places, consider using SCAN (e.g., redis.scanIterator({ MATCH: 'dra:ai:*session*' })) if the keyspace grows large. This is especially relevant here since this endpoint is called on every dashboard load and via Socket.IO live refresh, potentially creating more frequent calls than the other redis.keys usage sites (which are for cleanup/deletion tasks).
Suggested change:

for await (const _key of redis.scanIterator({ MATCH: 'dra:ai:*session*' })) {
  activeRedisSessions++;
}
export const useAdminTimeSeries = (metric: string, days = 30) => {
  const config = useRuntimeConfig();
  const data = ref<ITimeSeriesPoint[]>([]);
  const dsTypeBreakdown = ref<{ data_type: string; count: number }[]>([]);
  const isLoading = ref(false);

  const authHeaders = (): Record<string, string> => {
    const token = getAuthToken();
    if (!token) throw new Error('Authentication required');
    return {
      Authorization: `Bearer ${token}`,
      'Authorization-Type': 'auth',
    };
  };

  const fetchTimeSeries = async () => {
    const res = await $fetch<{ success: boolean; data: ITimeSeriesPoint[] }>(
      `${config.public.apiBase}/admin/stats/timeseries?metric=${metric}&days=${days}`,
      { headers: authHeaders() }
    );
    if (res.success) data.value = res.data;
  };

  const fetchDsTypeBreakdown = async () => {
    const res = await $fetch<{ success: boolean; data: { data_type: string; count: number }[] }>(
      `${config.public.apiBase}/admin/stats/datasource-types`,
      { headers: authHeaders() }
    );
    if (res.success) dsTypeBreakdown.value = res.data;
  };

  onMounted(async () => {
    if (!import.meta.client) return;
    isLoading.value = true;
    try {
      if (metric === 'datasource_types') {
        await fetchDsTypeBreakdown();
      } else {
        await fetchTimeSeries();
      }
    } catch (err) {
      console.error('[useAdminTimeSeries] Load failed:', err);
    } finally {
      isLoading.value = false;
    }
  });

  return { data, dsTypeBreakdown, isLoading };
};
The useAdminTimeSeries composable is defined and exported but never used anywhere. The admin dashboard page (admin/index.vue) fetches time-series data directly via its own fetchAllSeries function using inline $fetch calls. This dead code adds confusion for future maintainers. Either remove this unused composable or refactor the page to use it instead of duplicating the fetch logic.
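If the page were refactored to consume it, the relevant part of admin/index.vue's script setup could look roughly like this; the metric names match the inline $fetch calls being replaced, while the destructured local names are illustrative:

```ts
// Sketch: admin/index.vue consuming useAdminTimeSeries instead of inline $fetch calls.
const { data: signupSeries, isLoading: signupsLoading } = useAdminTimeSeries('signups', 30);
const { data: projectSeries } = useAdminTimeSeries('projects', 30);
const { data: aiSeries } = useAdminTimeSeries('ai_messages', 30);
const { data: cancellationSeries } = useAdminTimeSeries('cancellations', 30);
const { dsTypeBreakdown } = useAdminTimeSeries('datasource_types');

// Map the breakdown into the { label, value } shape the donut chart expects.
const dsTypeSeries = computed(() =>
  dsTypeBreakdown.value.map((d) => ({ label: d.data_type, value: d.count }))
);
```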
@copilot open a new pull request to apply changes based on the comments in this thread |
@mustafaneguib I've opened a new pull request, #359, to work on those changes. Once the pull request is ready, I'll request review from you. |
Description
feat(admin): implement admin dashboard with real-time platform statistics
Backend: AdminStatsProcessor singleton, /admin/stats REST routes (admin-guarded), SocketIODriver join-admin-room handler and emitToRoom() helper, and admin-stats-update emission on user creation and backup completion/failure.
Frontend: useAdminStats and useAdminTimeSeries composables, AdminStatCard and AdminChart (D3) components, shared admin stats types, and the admin/index.vue dashboard page with 8 sections.
Fixes: # (issue)