
Resume Tailor

Estimated time: 4–5 hours | Difficulty: Intermediate


What You Will Build

  • Connect your Resumator to an LLM API (OpenAI, Anthropic, Google, or Ollama)
  • Build a Spring Boot service that sends job descriptions and resumes to an AI model
  • Design a UI with a modal for pasting job descriptions and generating tailored resume suggestions
  • Learn prompt engineering to dramatically improve AI output quality
  • Understand responsible AI use in the job application process

Deliverable

A working "Tailor Resume" feature in your Resumator that accepts a job description and your current resume, sends them to an LLM API, and returns personalized suggestions for how to adjust your resume to better match the role.

1. Why Tailor Your Resume?

Sending the same resume to every job application is one of the most common mistakes job seekers make. It feels efficient — you write one great resume, polish it until it shines, and then blast it out to fifty companies. But here is the uncomfortable truth: a generic resume is a missed opportunity. Every single time.

Think about it from the other side of the desk. A recruiter at a mid-sized company receives two hundred applications for a single opening. They do not read each resume carefully from top to bottom. Research consistently shows that recruiters spend between 6 and 10 seconds scanning a resume before deciding whether to keep reading or move on. Six seconds. That is barely enough time to read your name, your most recent job title, and a handful of bullet points. If the words they are scanning for do not jump off the page immediately, your resume goes into the "no" pile. It does not matter how qualified you are.

But it gets worse. Before a human even sees your resume, it often has to pass through an Applicant Tracking System (ATS). An ATS is software that companies use to manage job applications at scale. When you submit your resume through a company's careers page, the ATS parses it, extracts keywords, and scores it against the job description. If your resume does not contain enough matching keywords, it gets filtered out automatically. A real human never sees it. You could be the perfect candidate, and you would never know you were rejected — by a machine.

This is why tailoring matters. When you tailor your resume, you are not fabricating experience or lying about your skills. Tailoring means emphasizing the parts of your real experience that are most relevant to this specific job. If the job description mentions "REST APIs" six times and your resume buries that experience in the last bullet point of your second job, you move it up. If the posting asks for "cross-functional collaboration" and you have done plenty of it but describe it as "working with different teams," you adjust your language. Same experience, better presentation.

The problem is that tailoring is tedious. Reading a job description carefully, identifying the key requirements, cross-referencing them against your resume, rewriting bullet points — it takes 20 to 30 minutes per application if you do it properly. Multiply that by the dozens of jobs you apply to during a serious job search, and you understand why most people give up and send the same resume everywhere.

That is exactly the problem we are going to solve. You are going to build a feature that uses an LLM (Large Language Model) to analyze a job description alongside your resume and generate specific, actionable suggestions for how to tailor it. Not a rewrite — suggestions. You remain in control. The AI does the tedious analysis; you make the final decisions.

By the end of this side quest, you will have a "Tailor Resume" button on every job card in your Resumator. Click it, and a modal appears with the job description pre-filled. Paste your resume, hit generate, and within seconds you have a list of specific changes: which bullet points to reword, which keywords to add, which sections to emphasize. You save 20 minutes per application while producing better results than you could manually. That is the power of combining your software engineering skills with AI capabilities.

2. Choosing an LLM API

Before you can build anything, you need to decide which LLM to use. There are several excellent options — hosted APIs from OpenAI, Anthropic, and Google, or local models served through Ollama — and the good news is that the code pattern is nearly identical regardless of which one you choose.

For this side quest, we will write the code using OpenAI's API as the primary example, but we will structure the service so you can swap in any provider with minimal changes. The concepts are identical across all providers: you send an HTTP POST request with your prompt in the body and receive a JSON response with the generated text. The URL, headers, and JSON structure differ slightly between providers, but the pattern is the same. This is an important insight: once you learn to integrate one REST API, integrating any other REST API is just a matter of reading the documentation.

A note on cost: LLM API calls are not free, with the exception of Ollama, which runs models locally. However, for this feature each call costs fractions of a cent. A typical resume tailoring request uses roughly 1,000 to 2,000 tokens, which costs less than $0.01 on most providers. Even during heavy development and testing, you are unlikely to spend more than a few dollars total. Most providers also offer free credits when you sign up.
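To see why the cost stays negligible, here is a back-of-the-envelope sketch. The 4-characters-per-token heuristic and the $0.15-per-million-token price are assumptions for illustration (roughly gpt-4o-mini's input pricing at the time of writing; check your provider's current rates):

```java
class TokenCostEstimate {

    // Rough heuristic: English text averages about 4 characters per token
    static int estimateTokens(String text) {
        return (int) Math.ceil(text.length() / 4.0);
    }

    // Cost in dollars for a given token count and price per million tokens
    static double estimateCostUsd(int tokens, double pricePerMillionUsd) {
        return tokens / 1_000_000.0 * pricePerMillionUsd;
    }

    public static void main(String[] args) {
        // A job description plus a resume: roughly 6,000 characters
        int tokens = estimateTokens("x".repeat(6000));
        double cost = estimateCostUsd(tokens, 0.15); // assumed $0.15 / 1M tokens
        System.out.println(tokens + " tokens, ~$" + String.format("%.6f", cost));
    }
}
```

At these assumed rates, a full request and response together cost well under a tenth of a cent.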

Getting Your API Key

Whichever provider you choose, you will need an API key. The process is the same pattern you followed when setting up the JSearch API key for your Resumator:

  1. Create an account on your chosen provider's platform
  2. Navigate to the API keys section (usually under Settings or API)
  3. Generate a new key
  4. Copy it immediately — most providers only show it once

Store your API key in application.properties, the same file where you keep your JSearch key:

# application.properties

# Existing JSearch key
jsearch.api.key=your-jsearch-key-here

# LLM API key (choose one)
llm.api.key=your-api-key-here
llm.api.url=https://api.openai.com/v1/chat/completions
llm.model=gpt-4o-mini
Never commit API keys to Git. Your application.properties file should already be listed in .gitignore. If it is not, add it now. Leaked API keys can be exploited within minutes, and you will be billed for someone else's usage. This is not theoretical — it happens constantly to developers who push keys to public repositories.

Because these values live in application.properties, Spring Boot will inject them into your service automatically via the @Value annotation — the same pattern you used for the JSearch API key. This is the power of learning patterns rather than memorizing specific implementations.

3. Building the Service

Now for the core of this feature: a service that takes a job description and a resume, sends them to an LLM, and returns tailored suggestions. This is a Spring Boot @Service class, exactly like the services you have already built for job searching.

The ResumeTailorService

Create a new file called ResumeTailorService.java in your service package:

package com.example.resumator.service;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.*;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

@Service
public class ResumeTailorService {

    @Value("${llm.api.key}")
    private String apiKey;

    @Value("${llm.api.url}")
    private String apiUrl;

    @Value("${llm.model}")
    private String model;

    private final RestTemplate restTemplate;
    private final ObjectMapper objectMapper;

    public ResumeTailorService(RestTemplate restTemplate,
                               ObjectMapper objectMapper) {
        this.restTemplate = restTemplate;
        this.objectMapper = objectMapper;
    }

    public String tailorResume(String jobDescription,
                               String currentResume) {
        String prompt = buildPrompt(jobDescription, currentResume);
        return callLlmApi(prompt);
    }

    private String buildPrompt(String jobDescription,
                                String currentResume) {
        return """
            You are an expert career coach and resume writer.
            Your task is to analyze a job description and a resume,
            then provide specific, actionable suggestions for how
            to tailor the resume to better match the job.

            RULES:
            - Do NOT rewrite the entire resume
            - Provide 5-8 specific suggestions
            - Each suggestion should reference a specific part of
              the resume and a specific requirement from the job
            - Suggest keyword additions from the job description
            - Recommend reordering or rephrasing bullet points
            - Never suggest fabricating experience
            - Format each suggestion as a clear, numbered item

            === JOB DESCRIPTION ===
            %s

            === CURRENT RESUME ===
            %s

            Provide your suggestions now:
            """.formatted(jobDescription, currentResume);
    }
    // callLlmApi is implemented in the next section
}

Let us walk through what this code does. The @Value annotations pull configuration from application.properties — you have seen this pattern before. The RestTemplate handles HTTP requests to the LLM API, and the ObjectMapper handles JSON parsing. The tailorResume method is the public entry point: give it a job description and a resume, and it returns suggestions.
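One wiring detail worth checking: Spring Boot does not register a RestTemplate bean automatically, so constructor injection only works if a bean is defined somewhere in your project. Your JSearch setup may already provide one; if not, a minimal configuration class (the class name here is just an example) would look like this:

```java
package com.example.resumator.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class HttpClientConfig {

    // Provides the RestTemplate that ResumeTailorService injects.
    // Without a bean like this, the application fails to start with
    // an "unsatisfied dependency" error.
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
```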

The buildPrompt method is where the magic happens. Notice how specific the instructions are: we tell the model exactly what we want (numbered suggestions), what we do not want (no full rewrites), and set constraints (never fabricate experience). The quality of these instructions directly determines the quality of the output. We will improve this prompt significantly in Part 5.

Calling the LLM API

Now add the method that actually makes the API call:

private String callLlmApi(String prompt) {
    HttpHeaders headers = new HttpHeaders();
    headers.setContentType(MediaType.APPLICATION_JSON);
    headers.setBearerAuth(apiKey);

    // Build the request body for OpenAI's chat completions format
    String requestBody;
    try {
        var body = objectMapper.createObjectNode();
        body.put("model", model);
        body.put("max_tokens", 1500);
        body.put("temperature", 0.7);

        var messages = body.putArray("messages");
        var userMessage = messages.addObject();
        userMessage.put("role", "user");
        userMessage.put("content", prompt);

        requestBody = objectMapper.writeValueAsString(body);
    } catch (Exception e) {
        throw new RuntimeException(
            "Failed to build API request", e);
    }

    HttpEntity<String> request =
        new HttpEntity<>(requestBody, headers);

    try {
        ResponseEntity<String> response =
            restTemplate.exchange(
                apiUrl,
                HttpMethod.POST,
                request,
                String.class
            );

        // Parse the response to extract the generated text
        JsonNode root =
            objectMapper.readTree(response.getBody());
        return root
            .path("choices")
            .get(0)
            .path("message")
            .path("content")
            .asText();

    } catch (Exception e) {
        throw new RuntimeException(
            "LLM API call failed: " + e.getMessage(), e);
    }
}

This follows the exact same pattern as your JSearch API calls: set headers, build a request body, make an HTTP call, and parse the JSON response. The only difference is the shape of the request and response. OpenAI expects a messages array with roles, and returns the generated text nested inside choices[0].message.content. If you use Anthropic or Google, the JSON structure is slightly different, but the pattern is identical.
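As a sketch of how small those differences are, here is what would change in callLlmApi for Anthropic's Messages API (verify the details against Anthropic's current documentation before relying on them):

```java
// Differences from the OpenAI version of callLlmApi (Anthropic Messages API):
//
// URL:      https://api.anthropic.com/v1/messages
// Headers:  "x-api-key: <your key>" and "anthropic-version: 2023-06-01"
//           instead of headers.setBearerAuth(apiKey)
// Body:     same model / max_tokens / messages fields
//           ("max_tokens" is required, not optional)
// Response: the generated text is at content[0].text
//           instead of choices[0].message.content
```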

Error Handling

LLM APIs can fail in ways that your JSearch API calls probably have not. These are external services maintained by other companies, and they have their own reliability characteristics. You need to handle failures gracefully so your users see a helpful message instead of a broken page.

A key principle here: never show raw exception messages to users. An error like HttpClientErrorException: 429 Too Many Requests means nothing to someone using your application. Translate every error into plain language: "The AI service is busy right now. Please wait a moment and try again." This is a professional habit that distinguishes polished applications from student projects.
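A lightweight way to do this translation is a helper that maps HTTP status codes to user-facing messages. This is a sketch, and friendlyMessage is a hypothetical helper name, not part of the code above:

```java
class ErrorMessages {

    // Translate an HTTP status code from the LLM provider into
    // plain language that is safe to show to users
    static String friendlyMessage(int httpStatus) {
        return switch (httpStatus) {
            case 401, 403 -> "The AI service rejected our credentials. "
                    + "Please check the API key configuration.";
            case 429 -> "The AI service is busy right now. "
                    + "Please wait a moment and try again.";
            case 500, 502, 503 -> "The AI service is temporarily unavailable. "
                    + "Please try again in a few minutes.";
            default -> "Failed to generate suggestions. Please try again.";
        };
    }

    public static void main(String[] args) {
        System.out.println(friendlyMessage(429));
    }
}
```

Catching Spring's HttpStatusCodeException in the service, reading its status code, and throwing a custom exception carrying the friendly message is one clean way to plug this in.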

Now add a controller endpoint so your frontend can reach this service:

package com.example.resumator.controller;

import com.example.resumator.service.ResumeTailorService;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.util.Map;

@RestController
@RequestMapping("/api/tailor")
public class ResumeTailorController {

    private final ResumeTailorService tailorService;

    public ResumeTailorController(
            ResumeTailorService tailorService) {
        this.tailorService = tailorService;
    }

    @PostMapping
    public ResponseEntity<Map<String, String>> tailor(
            @RequestBody Map<String, String> request) {

        String jobDescription =
            request.get("jobDescription");
        String resume = request.get("resume");

        if (jobDescription == null
                || jobDescription.isBlank()) {
            return ResponseEntity.badRequest().body(
                Map.of("error",
                    "Job description is required"));
        }
        if (resume == null || resume.isBlank()) {
            return ResponseEntity.badRequest().body(
                Map.of("error", "Resume is required"));
        }

        try {
            String suggestions =
                tailorService.tailorResume(
                    jobDescription, resume);
            return ResponseEntity.ok(
                Map.of("suggestions", suggestions));
        } catch (Exception e) {
            return ResponseEntity.internalServerError()
                .body(Map.of("error",
                    "Failed to generate suggestions. "
                    + "Please try again later."));
        }
    }
}

This controller receives a POST request with a JSON body containing the job description and resume, validates both fields, calls the service, and returns the suggestions. The error handling ensures users always see a friendly message rather than a stack trace. This is the same controller pattern you have used throughout the Resumator — nothing new here except the endpoint it calls.

Notice the input validation at the top of the method. We check that both the job description and resume are present and non-blank before calling the service. This is a defensive programming habit you should apply to every controller endpoint: validate inputs before processing. Without this check, a request with a missing resume would still call the LLM API, waste tokens, and return nonsensical suggestions — a bad user experience and a waste of money.

Also notice the generic Map<String, String> used for both the request and response. For a simple feature like this, a Map is perfectly adequate. In a larger application, you might define dedicated request and response DTOs (Data Transfer Objects) for type safety and documentation. Either approach works — use your judgment based on the complexity of the data.
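If you later want the type-safe version, Java records make the DTOs nearly free. These names are illustrative, not part of the code above:

```java
// Dedicated DTOs instead of Map<String, String>.
// With these, the controller signature could become:
//   public ResponseEntity<TailorResponse> tailor(@RequestBody TailorRequest request)
record TailorRequest(String jobDescription, String resume) {

    // Convenience check mirroring the controller's null/blank validation
    boolean isValid() {
        return jobDescription != null && !jobDescription.isBlank()
                && resume != null && !resume.isBlank();
    }
}

record TailorResponse(String suggestions) {}
```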

4. The UI

You have a working backend. Now you need a way for users to actually use it. The UI for this feature has three parts: a button on each job card, a modal dialog for input, and JavaScript to tie it all together.

The "Tailor Resume" Button

Add a "Tailor Resume" button to each job card in your search results. This sits alongside your existing "Save to Favorites" button:

<!-- Add this inside your job card template. HTML-escape the description
     before placing it in the data attribute: quotes or angle brackets in
     the raw text would break the markup. -->
<div class="job-card-actions">
  <button class="btn btn-primary btn-save-favorite"
          data-job-id="${job.id}">
    Save to Favorites
  </button>
  <button class="btn btn-secondary btn-tailor-resume"
          data-job-id="${job.id}"
          data-job-description="${job.description}">
    Tailor Resume
  </button>
</div>

The Tailor Resume Modal

When the user clicks "Tailor Resume," a modal appears with the job description pre-filled and a textarea for their resume. Add this HTML to the bottom of your main page, before the closing </body> tag:

<!-- Resume Tailor Modal -->
<div id="tailor-modal" class="modal-overlay" hidden>
  <div class="modal-content">
    <div class="modal-header">
      <h2>Tailor Your Resume</h2>
      <button class="modal-close"
              aria-label="Close modal">&times;</button>
    </div>
    <div class="modal-body">
      <div class="tailor-section">
        <label for="tailor-job-desc">
          Job Description
        </label>
        <textarea id="tailor-job-desc" rows="8"
                  readonly></textarea>
      </div>
      <div class="tailor-section">
        <label for="tailor-resume">
          Your Resume (paste or edit below)
        </label>
        <textarea id="tailor-resume" rows="10"
            placeholder="Paste your resume here..."
        ></textarea>
      </div>
      <button id="tailor-generate-btn"
              class="btn btn-primary btn-lg">
        Generate Suggestions
      </button>
      <div id="tailor-loading" class="loading-indicator"
           hidden>
        <div class="spinner"></div>
        <p>Analyzing your resume against the job
           description... This may take 10-20 seconds.</p>
      </div>
      <div id="tailor-results" class="tailor-results"
           hidden>
        <h3>Suggestions</h3>
        <div id="tailor-suggestions"></div>
      </div>
      <div id="tailor-error" class="tailor-error"
           hidden></div>
    </div>
  </div>
</div>

Style the modal so it looks polished and professional. Add this CSS:

/* Resume Tailor Modal Styles */
.modal-overlay {
  position: fixed;
  top: 0; left: 0; right: 0; bottom: 0;
  background: rgba(0, 0, 0, 0.6);
  display: flex;
  align-items: center;
  justify-content: center;
  z-index: 1000;
  padding: 1rem;
}
.modal-overlay[hidden] { display: none; }

.modal-content {
  background: var(--bg-primary);
  border-radius: 12px;
  max-width: 700px;
  width: 100%;
  max-height: 90vh;
  overflow-y: auto;
  box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
}

.modal-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  padding: 1.5rem;
  border-bottom: 1px solid var(--border-color);
}
.modal-header h2 { margin: 0; }
.modal-close {
  background: none; border: none;
  font-size: 1.5rem; cursor: pointer;
  color: var(--text-muted);
}

.modal-body { padding: 1.5rem; }

.tailor-section { margin-bottom: 1.5rem; }
.tailor-section label {
  display: block;
  font-weight: 600;
  margin-bottom: 0.5rem;
}
.tailor-section textarea {
  width: 100%;
  padding: 0.75rem;
  border: 1px solid var(--border-color);
  border-radius: 8px;
  font-family: inherit;
  font-size: 0.9rem;
  resize: vertical;
  background: var(--bg-secondary);
  color: var(--text-primary);
}

.loading-indicator {
  text-align: center;
  padding: 2rem 0;
}
.spinner {
  width: 40px; height: 40px;
  border: 4px solid var(--border-color);
  border-top-color: var(--accent);
  border-radius: 50%;
  animation: spin 0.8s linear infinite;
  margin: 0 auto 1rem;
}
@keyframes spin {
  to { transform: rotate(360deg); }
}

.tailor-results {
  margin-top: 1.5rem;
  padding: 1.5rem;
  background: var(--bg-secondary);
  border-radius: 8px;
  border-left: 4px solid var(--accent);
}

.tailor-error {
  margin-top: 1rem;
  padding: 1rem;
  background: #fee;
  color: #c00;
  border-radius: 8px;
}

The JavaScript

Now wire everything together. This JavaScript handles opening the modal, loading the saved resume from LocalStorage, making the API call, and displaying results:

// Resume Tailor functionality
(function() {
  const modal =
    document.getElementById('tailor-modal');
  const jobDescField =
    document.getElementById('tailor-job-desc');
  const resumeField =
    document.getElementById('tailor-resume');
  const generateBtn =
    document.getElementById('tailor-generate-btn');
  const loadingEl =
    document.getElementById('tailor-loading');
  const resultsEl =
    document.getElementById('tailor-results');
  const suggestionsEl =
    document.getElementById('tailor-suggestions');
  const errorEl =
    document.getElementById('tailor-error');

  // Load saved resume from LocalStorage
  const STORAGE_KEY = 'resumator-saved-resume';

  function loadSavedResume() {
    const saved = localStorage.getItem(STORAGE_KEY);
    if (saved) {
      resumeField.value = saved;
    }
  }

  function saveResume() {
    localStorage.setItem(
      STORAGE_KEY, resumeField.value);
  }

  // Open modal when "Tailor Resume" button is clicked
  document.addEventListener('click', function(e) {
    const btn = e.target.closest('.btn-tailor-resume');
    if (!btn) return;

    const jobDesc =
      btn.getAttribute('data-job-description');
    jobDescField.value = jobDesc || '';
    loadSavedResume();

    // Reset previous state
    resultsEl.hidden = true;
    errorEl.hidden = true;
    loadingEl.hidden = true;
    generateBtn.disabled = false;

    modal.hidden = false;
  });

  // Close modal
  modal.addEventListener('click', function(e) {
    if (e.target === modal
        || e.target.closest('.modal-close')) {
      modal.hidden = true;
    }
  });

  // Close on Escape key
  document.addEventListener('keydown', function(e) {
    if (e.key === 'Escape' && !modal.hidden) {
      modal.hidden = true;
    }
  });

  // Generate suggestions
  generateBtn.addEventListener('click', async () => {
    const jobDescription = jobDescField.value.trim();
    const resume = resumeField.value.trim();

    if (!resume) {
      errorEl.textContent =
        'Please paste your resume first.';
      errorEl.hidden = false;
      return;
    }

    // Save resume for next time
    saveResume();

    // Show loading, hide previous results
    generateBtn.disabled = true;
    loadingEl.hidden = false;
    resultsEl.hidden = true;
    errorEl.hidden = true;

    try {
      const response = await fetch('/api/tailor', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          jobDescription, resume
        })
      });

      const data = await response.json();

      if (!response.ok) {
        throw new Error(
          data.error || 'Something went wrong');
      }

      // Display suggestions
      suggestionsEl.innerHTML =
        formatSuggestions(data.suggestions);
      resultsEl.hidden = false;

    } catch (err) {
      errorEl.textContent =
        err.message
        || 'Failed to generate suggestions.';
      errorEl.hidden = false;
    } finally {
      generateBtn.disabled = false;
      loadingEl.hidden = true;
    }
  });

  function formatSuggestions(text) {
    // Convert plain text suggestions to HTML. Build each paragraph
    // with textContent so any markup in the LLM output is escaped
    // rather than injected into the page.
    return text
      .split('\n')
      .filter(line => line.trim())
      .map(line => {
        const p = document.createElement('p');
        p.textContent = line;
        return p.outerHTML;
      })
      .join('');
  }
})();

There are several important details in this code worth understanding. Each one represents a pattern you will use again and again in frontend development:

  • Event delegation: the click handler is attached to document and uses closest('.btn-tailor-resume'), so "Tailor Resume" buttons added to the page after a new search still work without re-binding listeners.
  • LocalStorage persistence: the resume is saved under a single key and reloaded every time the modal opens, so users do not have to paste it again for every job.
  • Loading state: the generate button is disabled and a spinner shown while the request is in flight, which prevents duplicate submissions.
  • try/catch/finally: the finally block re-enables the button and hides the spinner whether the request succeeded or failed, so the UI never gets stuck.
  • Checking response.ok: fetch does not throw on HTTP error statuses, so the code checks response.ok explicitly and surfaces the server's error message.

5. Prompt Engineering

Your feature works now. But the quality of the suggestions depends almost entirely on one thing: the prompt. The code you wrote in Part 3 sends a prompt to the LLM, and the LLM generates its response based on what you asked for and how you asked for it. A vague prompt produces vague suggestions. A specific, well-structured prompt produces specific, actionable suggestions.

This is not a minor detail. Prompt engineering — the practice of crafting prompts that reliably produce high-quality outputs from LLMs — is a real and increasingly valuable skill in software development. Companies are hiring prompt engineers. Developers who understand how to communicate effectively with AI models build better products. This section teaches you the fundamentals.

Start with a Basic Prompt

Imagine you sent the simplest possible prompt to an LLM:

Here is a job description and a resume.
Give me suggestions.

Job: [job description]
Resume: [resume]

The results from this prompt will be mediocre. The model does not know what kind of suggestions you want. It might rewrite your entire resume. It might give one generic sentence. It might focus on formatting instead of content. It might suggest skills you do not have. The output is unpredictable because the input was unstructured.

Iterate: Add Structure

Now compare that to a more structured prompt. Each addition improves the output:

You are a career coach specializing in tech resumes.

TASK: Analyze the job description and resume below.
Produce exactly 6 tailoring suggestions.

FORMAT each suggestion as:
  [Number]. [SECTION: which resume section to change]
  Current: [what the resume currently says]
  Suggested: [what it should say instead]
  Why: [how this matches the job description]

RULES:
- Only suggest changes to EXISTING content
- Never add skills or experience the candidate
  does not have
- Focus on keyword alignment with the job posting
- Prioritize the top half of the resume (recruiters
  read top-down)
- Use action verbs that match the job posting's
  language

=== JOB DESCRIPTION ===
{jobDescription}

=== RESUME ===
{resume}

Provide your 6 suggestions now:

This prompt is dramatically better. Here is why each change matters:

  • The role ("career coach specializing in tech resumes") focuses the model on relevant knowledge and tone.
  • The exact count ("exactly 6 suggestions") prevents both one-line answers and overwhelming walls of text.
  • The FORMAT template forces every suggestion into the same Current / Suggested / Why structure, which is easy to scan and easy to act on.
  • The RULES set hard constraints: existing content only, no fabricated skills, keyword alignment, and attention to the top of the resume.
  • The === delimiters clearly separate your instructions from the pasted data, so the model does not confuse resume text with directions.

Advanced: Add Examples

The most powerful prompting technique is few-shot prompting — providing one or two examples of what good output looks like. When the model sees an example, it mimics the style, depth, and format:

EXAMPLE of a good suggestion:

1. [SECTION: Professional Summary]
   Current: "Software developer with 3 years of
   experience building web applications."
   Suggested: "Full-stack developer with 3 years of
   experience building scalable REST APIs and React
   front-ends in agile, cross-functional teams."
   Why: The job posting mentions "REST APIs" 4 times,
   "React" 3 times, and "cross-functional" twice.
   The current summary misses all of these keywords
   that the ATS will scan for.

Adding even a single example like this dramatically improves the output quality because the model now understands exactly what depth and specificity you expect.

Prompt engineering is iterative. You will not get a perfect prompt on the first try. Run it, read the results, identify what is missing or wrong, adjust the prompt, and run it again. This iteration cycle is exactly how professional prompt engineers work. Expect to go through 5 to 10 iterations before the output is consistently good. Keep a text file of your prompt versions so you can track what changed and why. This is prompt engineering's equivalent of version control.

Testing Your Prompts

Before integrating your improved prompt into the Java code, test it directly in the LLM provider's playground or web interface. Every provider has one: OpenAI has the Playground, Anthropic has the Workbench in its Console, Google has AI Studio. Copy your prompt, paste in a real job description and a real resume (use your own or create a sample), and evaluate the results. Is each suggestion specific and actionable? Does the format match what you asked for? Are there any hallucinated skills? Iterate in the playground where feedback is instant, then transfer the final prompt to your code.

A common mistake is to test with only one job description and declare the prompt "done." Test with at least three or four very different roles: a backend Java position, a frontend React role, a DevOps job, and a management position. A good prompt produces useful, differentiated suggestions for all of them. A weak prompt produces the same generic advice regardless of the job.

Update your buildPrompt method in ResumeTailorService.java with your improved prompt. The beauty of this architecture is that improving the prompt requires zero changes to the rest of your code — the controller, the API call, and the frontend all stay the same. You are only changing the instructions, not the plumbing.

6. Responsible AI Use

You have built a powerful tool. Before you use it in a real job search, there are things you need to understand about using AI in the application process. This is not a lecture about ethics for the sake of it — this is practical advice that will affect whether you get hired.

AI Suggestions Are a Starting Point

The suggestions your tool generates are exactly that: suggestions. They are not a finished product. The LLM does not know the full context of your career, the nuances of your experience, or the specific culture of the company you are applying to. It is pattern-matching based on text. It is very good at identifying keyword gaps and suggesting rephrasing, but it cannot replace your judgment about what is accurate and what represents you authentically.

Treat every suggestion as a starting point. Read each one carefully. Ask yourself: "Is this accurate? Does this represent my actual experience? Would I be comfortable explaining this in an interview?" If the answer to any of these is no, modify or discard the suggestion.

Never Submit an Unread Resume

This should be obvious, but it happens more than you would think: never submit a resume you have not read word-for-word after making AI-suggested changes. LLMs can introduce subtle inaccuracies. They might rephrase something in a way that overstates your role. They might use industry jargon that you do not actually know. They might describe a technology in a way that sounds impressive but is technically wrong. Any of these will be immediately apparent to a knowledgeable interviewer, and the result is worse than if you had sent the original resume.

Do Not Fabricate Experience

There is a difference between rephrasing your experience to highlight relevant keywords and inventing experience you do not have. The line is clear: if you cannot talk about it confidently in an interview, it should not be on your resume.

Your prompt already includes the constraint "never suggest fabricating experience," but the model does not always follow instructions perfectly. It might suggest adding a skill you have barely touched, or rewording a bullet point in a way that implies more responsibility than you had. You are the final filter. Use your judgment.

Hiring Managers Can Detect AI-Generated Resumes

Here is the practical reality: hiring managers are getting very good at spotting resumes that were entirely written by AI. The tells are recognizable once you know what to look for: generic phrasing that could describe any candidate, buzzword overload, and an unnaturally uniform structure where every bullet point follows the exact same template.

This is why your tool generates suggestions rather than a complete rewritten resume. The goal is to keep your authentic voice and real experience while optimizing for keyword alignment and relevance. A resume that is clearly yours but strategically tailored will always outperform one that reads like it was generated by ChatGPT.

The bottom line: Use AI to work smarter, not to pretend you are someone you are not. A tailored resume that accurately represents your experience, written in your voice, with strategic keyword placement — that is the sweet spot. That is what this tool helps you build.

One final thought on this topic. The skills you are building here — integrating APIs, building services, designing user interfaces, engineering prompts — are the real resume items. When an interviewer sees "Built an AI-powered resume tailoring feature using Spring Boot and OpenAI's API" on your resume, they are going to want to hear about it. And because you actually built it, you can talk about every decision, every tradeoff, and every challenge. That conversation is worth more than any amount of keyword optimization.

Knowledge Check

1. Why is sending the same generic resume to every job application a problem?

Correct! ATS software scores your resume against the job description by matching keywords, and recruiters only spend a few seconds scanning what makes it through. A generic resume misses the specific language and keywords each job posting uses, meaning it either gets filtered out by the ATS or fails to catch the recruiter's eye during their brief scan. Tailoring your resume means emphasizing the parts of your real experience that match what each specific role is looking for.

2. What is the most impactful way to improve the quality of suggestions from an LLM API?

Exactly right! Prompt engineering — crafting a clear prompt with a defined role, specific constraints, output format, and examples — has the greatest impact on output quality. A well-engineered prompt on a smaller model often outperforms a vague prompt on a larger one. Adding structure (numbered suggestions with sections), constraints (never fabricate experience), and examples (few-shot prompting) transforms mediocre output into specific, actionable suggestions.

3. Why does the Resume Tailor feature generate suggestions rather than a fully rewritten resume?

That's right! AI-generated resumes have telltale patterns that hiring managers increasingly recognize: generic phrasing, buzzword overload, and unnaturally uniform structure. More importantly, the LLM does not know the full context of your career and may introduce subtle inaccuracies. By generating suggestions rather than a rewrite, you stay in control. You decide which suggestions to adopt, you maintain your authentic voice, and you can confidently discuss everything on your resume in an interview.

Side Quest Complete

You have built a genuinely useful feature — one that solves a real problem in the job search process. Let us recap what you accomplished:

  • Connected your Resumator to an LLM API and stored the credentials safely in application.properties
  • Built a Spring Boot service and controller that send a job description and resume to the model and return tailored suggestions
  • Designed a modal UI with loading, error, and results states
  • Learned prompt engineering fundamentals: roles, constraints, output formats, and few-shot examples
  • Thought through responsible AI use in the job application process

This feature makes your Resumator significantly more valuable as a real job search tool. More importantly, you now understand how to integrate LLM APIs into any application — a skill that is in high demand across the entire software industry.

Think about what else you could build with this same pattern. Interview preparation questions generated from a job description. Cover letter drafts. Salary negotiation talking points based on the role and your experience. The LLM API is a general-purpose tool, and now that you know how to use it, the possibilities are limited only by your prompt engineering skills and your imagination.
