When I started building a custom Drupal module to integrate with an external PDF generation service, I decided to test my theory that AI assistance would make implementing automated tests both faster and more accessible.
I’ve written Drupal code for years, but automated testing is often sacrificed when deadlines loom. With manual test cycles taking more and more time as projects grew more complex, I wondered whether AI could help me break this pattern. Could I finally get both rapid development and thorough testing?
So I gave it a shot.
I expected AI to simply churn out working tests. Instead, I learned how to make it a reliable test-writing partner.
AI as a test-writing assistant
My first move was simple: Ask AI to generate some tests for my module.
What I got was… mixed.
On the plus side, AI gave me scaffolding fast. Instead of staring at a blank file, I had a PHPUnit class with methods and even some mocks ready to go. It was also good at analyzing a custom module and suggesting tests for various aspects of the code.
But the drafts often missed important details:
- Forgetting required modules
- Mishandling methods
- Trying to mock services in ways that don’t work in Drupal functional tests
In short, AI made it easier to start, but not to finish. That’s when I began building a Cursor rules file: a set of guardrails that capture what AI consistently gets wrong, so it can produce better results next time.
Starting simple: Unit tests
If you’re new to Drupal testing, unit tests are the easiest place to start. They don’t boot Drupal; they just test a class in isolation.
For example, my module had a slug generator service that turned titles into clean, URL-safe strings. A unit test for that looked like this:
```php
<?php

namespace Drupal\Tests\my_module\Unit;

use Drupal\my_module\Service\PdfService;
use Drupal\Tests\UnitTestCase;

/**
 * Tests slug generation in the PDF service.
 *
 * @coversDefaultClass \Drupal\my_module\Service\PdfService
 * @group my_module
 */
class SlugGeneratorTest extends UnitTestCase {

  /**
   * The service under test.
   *
   * @var \Drupal\my_module\Service\PdfService
   */
  protected PdfService $pdfService;

  /**
   * {@inheritdoc}
   */
  protected function setUp(): void {
    parent::setUp();
    // generateSlug() needs no dependencies, so skip the constructor.
    $this->pdfService = (new \ReflectionClass(PdfService::class))
      ->newInstanceWithoutConstructor();
  }

  /**
   * Test slug generation.
   *
   * @covers ::generateSlug
   * @dataProvider slugProvider
   */
  public function testGenerateSlug(string $input, string $expected): void {
    $reflection = new \ReflectionClass($this->pdfService);
    $method = $reflection->getMethod('generateSlug');
    $method->setAccessible(TRUE);
    $result = $method->invoke($this->pdfService, $input);
    $this->assertEquals($expected, $result);
  }

  /**
   * Data provider for slug generation tests.
   */
  public static function slugProvider(): array {
    return [
      ['Title with Special@#$% Characters!', 'title-with-special-characters'],
      ['  Leading and trailing spaces  ', 'leading-and-trailing-spaces'],
      ['Multiple   Spaces', 'multiple-spaces'],
      [str_repeat('Very Long Title ', 20), 'very-long-title-very-long-title-very-long-title-very-long-title-very-long-title-very-long-title-very'],
      // Test HTML tag stripping.
      ['<p>Title with HTML tags</p>', 'title-with-html-tags'],
    ];
  }

}
```
This runs in milliseconds. If I change the slugging logic later, the test tells me instantly whether I broke it.
AI helped scaffold this, but skipped some `use` statements. Small fixes like this kept coming up — enough that I started adding them to my ruleset.
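For reference, here's the kind of logic such a test exercises. This is my own standalone sketch, not the module's actual service code, so treat the details as illustrative:

```php
<?php

/**
 * Turns a title into a clean, URL-safe slug (illustrative sketch only).
 */
function generateSlug(string $title): string {
  // Strip HTML tags first.
  $slug = strip_tags($title);
  // Lowercase, then collapse runs of non-alphanumeric characters to hyphens.
  $slug = strtolower($slug);
  $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);
  // Trim stray hyphens and cap the length.
  $slug = trim($slug, '-');
  return substr($slug, 0, 100);
}

echo generateSlug('<p>Title with HTML tags</p>');  // Prints "title-with-html-tags".
```

In the real service the method is protected, which is why the unit test reaches it through reflection rather than calling it directly.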
Getting serious: Kernel tests
Next, I wanted to test how my module’s services integrated with Drupal’s APIs. For this, kernel tests were the right fit: They boot the service container and database layer without going through the full HTTP stack.
Here’s an example where I tested that my service is triggered on node save:
```php
<?php

namespace Drupal\Tests\my_module\Kernel;

use Drupal\my_module\Service\PdfService;
use Drupal\node\Entity\Node;
use Drupal\Tests\KernelTests\KernelTestBase;

/**
 * @group my_module
 */
class PdfServiceTest extends KernelTestBase {

  /**
   * {@inheritdoc}
   */
  protected static $modules = [
    'system',
    'user',
    'field',
    'file',
    'node',
    'text',
    'my_module',
  ];

  /**
   * {@inheritdoc}
   */
  protected function setUp(): void {
    parent::setUp();
    $this->installEntitySchema('user');
    $this->installEntitySchema('node');
    $this->installEntitySchema('file');
    $this->installSchema('file', ['file_usage']);
    $this->installSchema('node', ['node_access']);
    $this->installConfig(['field', 'node', 'file']);
  }

  /**
   * Test that PDF generation is triggered for valid nodes.
   */
  public function testPdfGenerationTriggered(): void {
    $node = Node::create([
      'type' => 'article',
      'title' => 'Test Article',
      'field_show_pdf_link' => 1,
      'status' => 1,
    ]);

    // Mock the PDF service.
    $mock_service = $this->createMock(PdfService::class);
    // Should call the main generation method once.
    $mock_service->expects($this->once())
      ->method('generatePdfIfNeeded')
      ->with($node);
    $this->container->set('my_module.pdf_service', $mock_service);

    $node->save();
  }

}
```
The AI-generated draft left out `node_access`, `file_usage`, and the `text` module, causing immediate crashes. Once I added rules requiring these every time, my results improved dramatically.
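To make sense of this test, it helps to see the shape of the code it exercises: a node-save hook that hands the node to the service. Below is a hypothetical sketch using stand-in classes so it runs outside Drupal; in the real module the hook would live in my_module.module, take only the node, and fetch the service from the container via \Drupal::service().

```php
<?php

// Stand-ins so this sketch runs outside Drupal; real code would use the
// NodeInterface from core and fetch the service from the container.
interface NodeInterface {
  public function label(): string;
}

final class FakeNode implements NodeInterface {
  public function __construct(private string $title) {}

  public function label(): string {
    return $this->title;
  }
}

final class PdfService {
  /** @var string[] Titles a PDF was requested for. */
  public array $generated = [];

  public function generatePdfIfNeeded(NodeInterface $node): void {
    // The real service would check field_show_pdf_link and call the API.
    $this->generated[] = $node->label();
  }
}

/**
 * Sketch of hook_ENTITY_TYPE_insert() for nodes.
 *
 * The real hook takes only the entity; the service is passed in here
 * just to keep the sketch self-contained.
 */
function my_module_node_insert(NodeInterface $node, PdfService $pdf_service): void {
  // This is the call the kernel test's mock expects exactly once.
  $pdf_service->generatePdfIfNeeded($node);
}

$pdf_service = new PdfService();
my_module_node_insert(new FakeNode('Test Article'), $pdf_service);
echo $pdf_service->generated[0];  // Prints "Test Article".
```

Because the test swaps the container's service for a mock before calling `$node->save()`, the hook fires normally but the expectation on `generatePdfIfNeeded()` is what actually gets verified.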
End-to-end confidence: Functional tests
Finally, I wanted to be sure my module behaved correctly in a real browser-like context. For that, I wrote functional tests. They boot Drupal’s full stack and simulate user actions.
Here, I used a functional test to check that the service was called (or not) on node save.
```php
<?php

namespace Drupal\Tests\my_module\Functional;

use Drupal\Core\File\FileSystemInterface;
use Drupal\field\Entity\FieldConfig;
use Drupal\field\Entity\FieldStorageConfig;
use Drupal\file\Entity\File;
use Drupal\my_module\Service\PdfService;
use Drupal\node\Entity\Node;
use Drupal\node\Entity\NodeType;
use Drupal\Tests\BrowserTestBase;

/**
 * Tests for the PDF controller routes.
 */
class PdfControllerTest extends BrowserTestBase {

  /**
   * {@inheritdoc}
   */
  protected $defaultTheme = 'stark';

  /**
   * {@inheritdoc}
   */
  protected static $modules = [
    'system',
    'user',
    'node',
    'field',
    'file',
    'text',
    'filter',
    'my_module',
  ];

  /**
   * A test node.
   *
   * @var \Drupal\node\NodeInterface
   */
  protected $testNode;

  /**
   * {@inheritdoc}
   */
  protected function setUp(): void {
    parent::setUp();

    // Ensure all modules are properly installed.
    $this->container->get('module_installer')->install([
      'system', 'user', 'node', 'field', 'file', 'text', 'filter', 'my_module',
    ]);

    // ...Create article content type with correct field.

    // Mock the PdfService to avoid real API calls.
    $mock_service = $this->createMock(PdfService::class);
    $mock_service->method('generatePdfForNode')
      ->willReturn(TRUE);
    $mock_service->method('getOrGenerateNodePdf')
      ->willReturn([
        'file_url' => 'https://example.com/test.pdf',
        'status' => 'success',
      ]);
    $mock_service->method('isGeneratingPdf')->willReturn(FALSE);
    $mock_service->method('markGeneratingPdf');
    $mock_service->method('unmarkGeneratingPdf');
    $this->container->set('my_module.pdf_service', $mock_service);

    // Rebuild container to ensure our mock service is used.
    $this->rebuildContainer();

    // Create a test node; leave field_show_pdf_link off so saving it
    // doesn't trigger PDF generation hooks.
    $this->testNode = Node::create([
      'type' => 'article',
      'title' => 'Test Article for PDF',
      'field_show_pdf_link' => 0,
      'status' => 1,
    ]);
    $this->testNode->save();
  }

  /**
   * Test that PDF file can be attached to nodes properly.
   */
  public function testPdfFileAttachment(): void {
    // Test that we can create and attach a file to the node.
    $file = File::create([
      'filename' => 'test.pdf',
      'uri' => 'public://test.pdf',
      'status' => 1,
    ]);

    // Create actual file content.
    $directory = dirname($file->getFileUri());
    $this->container->get('file_system')
      ->prepareDirectory($directory, FileSystemInterface::CREATE_DIRECTORY);
    file_put_contents($file->getFileUri(), 'dummy pdf content');
    $file->save();

    $this->testNode->set('field_pdf', ['target_id' => $file->id()]);
    $this->testNode->save();

    // Test that the file was properly attached.
    $attached_file_id = $this->testNode->get('field_pdf')->target_id;
    $this->assertEquals($file->id(), $attached_file_id);

    // Test that the file exists and has the expected properties.
    $loaded_file = File::load($attached_file_id);
    $this->assertNotNull($loaded_file);
    $this->assertEquals('test.pdf', $loaded_file->getFilename());
    $this->assertTrue(file_exists($loaded_file->getFileUri()));
  }

}
```
At first, AI tried to test full routes and hit the real API, which caused endless issues. I learned to keep functional tests focused: confirm installation, fields, and basic service availability. Anything more complex belongs in unit or kernel tests.
The real time savings
So, did writing tests using AI save me time?
At first, no. Writing tests (and wrangling AI into producing useful ones) definitely added time to my development process. But as the service kept evolving, I really felt the value of being able to rerun the tests: faster iteration, safer upgrades, less stress.
And the bigger the project, the more future hours those tests will save.
Turning lessons into Cursor rules
After going back and forth with AI, I distilled what I’d learned into a set of Cursor rules. This way, instead of fixing the same mistakes over and over, I could keep refining these rules, and reuse them every time I asked it to generate a new test.
Here’s the distilled version I now keep in `.cursor/rules/drupal-test-rules.mdc`. Feel free to copy and adapt it for your own projects. To get the most out of it, keep refining it as you go.
Note: When I first added the rules and asked for a new test, the output still contained some of the same errors the rules are designed to prevent. I asked the AI whether it was following the rules. It reprimanded itself for not following them, and then started talking about them excessively. I had to create a new rule to get it to stop talking about the rules so much!
```
---
description: Creating automated unit, kernel, or functional tests
alwaysApply: false
---

## Drupal automated test guidelines

When generating Drupal tests, always follow these patterns:

### Required dependencies (add to composer.json require-dev):

- "phpunit/phpunit": "*"
- "phpspec/prophecy-phpunit": "*"
- "mikey179/vfsstream": "*"
- "behat/mink": "*"
- "behat/mink-browserkit-driver": "*"
- "symfony/browser-kit": "*"

### PHPUnit configuration (phpunit.xml):

- Use bootstrap="web/core/tests/bootstrap.php"
- Set environment variables: SYMFONY_DEPRECATIONS_HELPER, DRUPAL_TEST_BASE_URL, SIMPLETEST_DB, SIMPLETEST_BASE_URL
- Define testsuites for unit, kernel, functional

### Unit tests

When creating Drupal unit tests:

- Extend Drupal\Tests\UnitTestCase
- Include ALL required use statements for Drupal interfaces
- Mock external dependencies (HTTP clients, loggers, entity managers)
- Make data provider methods static
- Use reflection for testing private/protected methods
- Never include actual Drupal bootstrap or database operations

### Kernel tests

🚨 **BEFORE WRITING KERNEL TESTS:**

1. **Mock external services FIRST** in setUp()
2. **Test only specific integration, not full workflows**

When creating Drupal kernel tests:

- Extend Drupal\Tests\KernelTests\KernelTestBase
- Required modules: ['system', 'field', 'file', 'node', 'text', 'user', 'MODULE_NAME']
- Install schemas: installEntitySchema('user'), installEntitySchema('node'), installEntitySchema('file')
- Install schemas: installSchema('file', ['file_usage']), installSchema('node', ['node_access'])
- Install config: installConfig(['field', 'node', 'file'])
- Test service integration without full Drupal bootstrap

### Functional tests

When creating Drupal functional tests:

- Extend Drupal\Tests\BrowserTestBase
- Include all required modules: ['system', 'user', 'node', 'field', 'file', 'text', 'filter', 'MODULE_NAME']
- Mock external services to avoid real API calls
- Handle void methods correctly (no return values)
- Use \Drupal\Core\File\FileSystemInterface::CREATE_DIRECTORY for file operations

#### Critical strategies:

- CRITICAL: Disable module hooks during test setup to avoid triggering real services
- Service mocking in functional tests is unreliable - prefer testing without triggering hooks
- Use account_switcher for authentication instead of drupalLogin() when possible
- Focus on testing module installation, field configuration, and service availability rather than full route integration

#### Service mocking:

- Return values: $mock->method('name')->willReturn($value)
- Void methods: $mock->method('name') (no willReturn)
- Never called: $mock->expects($this->never())->method('name')
- Functional test mocking is fragile - prefer testing without triggering complex workflows

#### Test strategy:

- Test what functional tests do best: module installation, field setup, service availability
- Avoid testing full integration workflows that trigger external services
- Use unit tests for service logic, kernel tests for hook integration, functional tests for UI/installation
- Route testing is often testing Drupal core, not custom logic

### Critical requirements:

- Always include 'text' module for text_with_summary fields
- Always install node_access schema for node operations
- Data providers must be static methods
- Mock all external API services in tests

### Warning:

Functional tests + module hooks + external services = complexity - design tests to avoid this combination
```
Final thoughts
This started as an experiment: Could I save time on a complex Drupal module by leaning on AI to help me write tests?
The answer turned out to be yes — with conditions. AI rarely produced perfect tests, but it gave me a faster starting point than writing everything from scratch. By capturing the lessons in a reusable ruleset, I turned the AI into a consistent assistant instead of a wildcard.
The bigger lesson is that tests aren’t “one and done.” As modules evolve, your tests need to evolve, too. Building the habit of writing and updating tests as part of your workflow pays off in faster debugging, safer upgrades, and more confidence in every release.
If you’ve been putting off testing because it feels like a luxury, try training your AI to help you write them. The result won’t be flawless, but it can lower the barrier enough to help you build the habit — and over time, that habit will not only speed up development, but also save you from the endless cycle of manual retesting.
With the right habits (and a little AI help), testing stops being overhead and starts being a timesaver.