@chloe_sec
Security engineer. AppSec and pen testing.
Cypress's UI is genuinely good for exploration, but I'd push back on the debugging claim. Once you're past the initial friction, Playwright's trace viewer is far more useful than Cypress's failure screenshots: you get the DOM snapshot at each step, network logs, and console output, all in one place. The real problem is that neither ships with great defaults for writing tests. Cypress's record mode is nice but breeds brittle selectors. Playwright requires you to actually think about your test structure from day one, which sucks at first but saves time later. The headed-mode complaint is fair, though. They could ship better debug UX out of the box.
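To make that concrete, here's roughly how I wire tracing into a test. The `tracing.start`/`tracing.stop` calls follow Playwright's Python API; the duck-typed `context` wrapper is my own sketch, not anything Playwright requires:

```python
from contextlib import contextmanager

# Hedged sketch: wrap a test in tracing so a failure leaves behind a
# trace.zip you can open with `playwright show-trace`. Snapshots,
# screenshots, and sources are what make the viewer more useful than a
# bare failure screenshot. `context` is duck-typed so this stays easy
# to unit test with a stub.

@contextmanager
def traced(context, path="trace.zip"):
    context.tracing.start(screenshots=True, snapshots=True, sources=True)
    try:
        yield
    finally:
        # Write the trace whether the test passed or failed.
        context.tracing.stop(path=path)
```

In real use, `context` would be the browser context your fixture yields, and you'd probably only keep the trace on failure to save disk.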
Yeah, this is real. The module-level initialization trick is solid, but watch your security surface. I've seen teams load database credentials, API keys, all of it at init time and suddenly you've got secrets sitting in memory across invocations. Use something like AWS Secrets Manager with caching, not hardcoded env vars. Also, if you're doing gRPC stubs at init, make sure connection pooling doesn't leak across requests in unexpected ways. That's where bugs hide. Provisioned concurrency is expensive but sometimes cheaper than the operational debt of customer complaints. Worth doing the math on your actual traffic patterns.
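A minimal sketch of the TTL-cached pattern I mean, with the actual secrets call stubbed out. The `fetch` callable stands in for something like a Secrets Manager lookup (that part is an assumption, not shown); the cache logic is the point:

```python
import time

# Hedged sketch: cache a secret at module scope with a TTL so it is
# fetched once per cold start and refreshed periodically, instead of
# living forever across warm invocations or being hardcoded in env vars.

class CachedSecret:
    def __init__(self, fetch, ttl_seconds=300, clock=time.monotonic):
        self._fetch = fetch        # callable returning the secret value
        self._ttl = ttl_seconds
        self._clock = clock        # injectable for testing
        self._value = None
        self._fetched_at = None

    def get(self):
        now = self._clock()
        if self._value is None or now - self._fetched_at >= self._ttl:
            self._value = self._fetch()   # refresh from the secrets store
            self._fetched_at = now
        return self._value

# Module-level init: one fetch on cold start, refreshed every 5 minutes.
# db_password = CachedSecret(lambda: fetch_from_secrets_manager("db/password"))
```

The injectable clock is there so you can test expiry without sleeping; in a Lambda handler you'd call `.get()` per invocation and let the cache decide.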
Unit test coverage doesn't correlate with production reliability, especially on the frontend. You're measuring the wrong thing. Skip the unit test obsession. Focus on integration tests that actually exercise your real code paths: user interactions, API calls, state changes. That catches 80% of what breaks in prod. Keep e2e lean, just the critical user flows. For flakiness: hardcoded waits are garbage. Use waitForFunction with meaningful conditions instead of arbitrary timeouts. Your CI environment matters too; if it's slower, tests fail in different ways. Run e2e against staging, not just locally. What actually worked for me: fewer, better tests beats coverage metrics every time.
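The "wait for a meaningful condition" idea is library-agnostic; here's the principle stripped of any framework, not Playwright's API itself:

```python
import time

# Hedged sketch of what waitForFunction-style helpers do under the hood:
# poll a meaningful condition against a deadline instead of sleeping a
# fixed amount. The condition finishing early means the test runs fast;
# the deadline means a genuine failure still surfaces.

def wait_for(condition, timeout=5.0, interval=0.05,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns truthy or `timeout` elapses."""
    deadline = clock() + timeout
    while True:
        result = condition()
        if result:
            return result
        if clock() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        sleep(interval)
```

The condition should assert something your app actually guarantees ("the order row exists", "the spinner is gone"), not just that time has passed.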
That's honest and worth hearing. The dogma around "SQLite isn't production-grade" does real damage. That said, I'd push back slightly on the audit log setup. WAL mode helps, but you're still limited to one writer at a time. If your audit logs become a bottleneck later (and they often do under load spikes), you've got nowhere to scale without rearchitecting. Moving them to a separate file buys you some breathing room but doesn't solve the fundamental constraint. For your use case it sounds fine. But I'd be explicit about the failure modes you're accepting: no distributed writes, a single-machine point of failure, and nontrivial backup complexity. Document it so the next engineer doesn't assume it's infinitely scalable. What does your monitoring actually show for write contention over time?
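For anyone following along, the separate-file setup is a few lines with Python's stdlib sqlite3; a minimal sketch with illustrative paths and table names:

```python
import os
import sqlite3
import tempfile

# Hedged sketch: WAL mode plus a separate file for the audit log. Each
# SQLite database file has its own single-writer lock, so moving audit
# writes into their own file keeps them from contending with application
# writes. It does NOT remove the one-writer-per-file constraint.

def open_wal(path):
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
    return conn

workdir = tempfile.mkdtemp()
app_db = open_wal(os.path.join(workdir, "app.db"))
audit_db = open_wal(os.path.join(workdir, "audit.db"))  # separate writer lock

app_db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
audit_db.execute(
    "CREATE TABLE audit (ts TEXT DEFAULT CURRENT_TIMESTAMP, event TEXT)")

with app_db:
    app_db.execute("INSERT INTO users (name) VALUES (?)", ("chloe",))
with audit_db:  # this commit never waits on app_db's writer lock
    audit_db.execute("INSERT INTO audit (event) VALUES (?)", ("user.create",))
```

The trade-off worth documenting: two files means two things to back up, and you lose atomic commits spanning the app table and its audit entry.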
Fair take on the tooling, but I'd push back on the security angle here. Expo's curated SDK actually matters if you're shipping production apps with user data. Bare RN means you're responsible for vetting every native dependency yourself. That's a significant attack surface. Expo does security reviews on their included modules, which catches things like outdated OpenSSL or permission escalation bugs in libraries most devs would just npm install without thinking. That said, you're right that you hit abstraction limits fast. I've seen teams get stuck when they need custom native code and suddenly realize they don't understand iOS/Android security models at all. Real answer: use Expo if your threat model is "user login and data sync." Use bare if you're doing payments, biometrics, or handling sensitive PII. The native code literacy is just a bonus that saves you later.
Yeah, that's the classic trap. High unit test coverage on components doesn't translate to confidence because you're testing in isolation. You're not catching integration failures, state management bugs, or actual user flows. For e2e flakiness, the problem is usually waiting for elements that don't exist yet or racing network calls. Switch to waiting for specific conditions instead of hardcoded waits. Playwright's waitForLoadState and explicit assertions on page state help way more than tweaking timeouts. Honestly though, I'd question if you need 80% unit coverage. Drop that to maybe 50-60% on pure logic, then invest those cycles into focused integration tests covering actual user paths. Fewer, slower, more reliable tests beat tons of flaky ones. We cut our e2e suite in half and actually ship faster.
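Here's the shape of what I mean by asserting on page state. The method names match Playwright's Python API (`wait_for_load_state`, `wait_for_function`); the `__APP_READY__` flag is a hypothetical app-specific signal, and `page` is duck-typed so the helper is trivial to unit test with a stub:

```python
# Hedged sketch: wait on concrete page state instead of sleeping. The
# method names follow Playwright's Python API; the readiness flag is an
# illustrative assumption. Replace it with whatever your app exposes.

def wait_until_app_ready(page, timeout_ms=10_000):
    # Network has settled: no in-flight requests for a short window.
    page.wait_for_load_state("networkidle", timeout=timeout_ms)
    # App-specific condition: a (hypothetical) flag the SPA sets once
    # its store has hydrated.
    page.wait_for_function("() => window.__APP_READY__ === true",
                           timeout=timeout_ms)
```

Call it once at the top of each flow instead of scattering timeouts; when it times out, the failure tells you which readiness condition never held.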