If you are about to ship a v1.0 iOS app with subscriptions, In-App Purchases, a Watch companion, widgets, or any kind of health-adjacent positioning, the App Review process is going to be harder than the documentation suggests. This post is a write-up of one launch cycle (four rejections across five days, five reviewers, a single binary) and the patterns that made the cycle survivable. I am sharing it because the experience is one of the least-documented parts of shipping a serious indie app, and the documented version (WWDC sessions, the Review Guidelines page, the developer forums) does not match the day-to-day reality.

If you are mid-cycle and looking for tactical fixes, jump straight to “What to fix before you submit” and “Patterns worth understanding”. The personal timeline is in the second half if you want context.

# What to fix before you submit

These are the items that came up across four rejections. If you fix them upfront, you eliminate the bulk of the rejection surface for a sensitive-category v1.0 with subscriptions.

## Pricing display hierarchy on subscription paywalls

Apple’s Human Interface Guidelines for subscriptions are unambiguous about pricing presentation. The billed amount must be the most prominent pricing element. Marketing instinct (make the discount feel like the win) is the wrong instinct here. Compliance instinct (make the billed amount the typographic hero) is the right one.

What this means in practice: the regular price should be the largest, heaviest element. The introductory or discount price should be smaller, with an explicit “First period:” label. Avoid percentage badges (use “LAUNCH OFFER”, not “25% OFF”). The 3.1.2(c) Payments and Subscriptions enforcement on this point is consistent across reviewers.
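As a rough SwiftUI sketch of that hierarchy (the view name, prices, and label copy here are hypothetical, not taken from any real paywall):

```swift
import SwiftUI

// Hypothetical paywall pricing block. The billed amount is the
// typographic hero; the introductory price is smaller and explicitly
// labelled; the offer badge uses wording rather than a percentage.
struct PaywallPricingView: View {
    let billedPrice = "$29.99 / year"   // hypothetical regular price
    let introPrice = "$19.99"           // hypothetical first-period price

    var body: some View {
        VStack(spacing: 6) {
            Text("LAUNCH OFFER")        // wording badge, not "33% OFF"
                .font(.caption2.weight(.semibold))
            Text(billedPrice)           // largest, heaviest element
                .font(.largeTitle.bold())
            Text("First year: \(introPrice)") // explicit period labelling
                .font(.subheadline)
                .foregroundStyle(.secondary)
        }
    }
}
```

The point is the relative weight, not the specific fonts: whatever type scale you use, the billed amount must win the visual comparison at a glance.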

## Explicit data deletion path

Even when your “account” is just an opt-in anonymous identifier, Apple’s 5.1.1(v) requires a labelled, visible, immediate deletion path. Toggle-off-as-implicit-deletion reads as compliant from the developer side but does not satisfy current enforcement.

Bake an explicit “Delete My Data” button into the privacy or settings screen from the first build. Add a confirmation alert. Make sure the surrounding copy describes the actual deletion behaviour rather than a placeholder timeline you intend to revise later.
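A minimal SwiftUI sketch of such a flow (the view name, alert copy, and `deleteAnonymousData()` helper are hypothetical placeholders):

```swift
import SwiftUI

// Hypothetical settings section: an explicit, labelled, immediate
// deletion path with a confirmation alert, per 5.1.1(v).
struct DataDeletionSection: View {
    @State private var confirming = false

    var body: some View {
        Button("Delete My Data", role: .destructive) {
            confirming = true
        }
        .alert("Delete all shared data?", isPresented: $confirming) {
            Button("Delete", role: .destructive) { deleteAnonymousData() }
            Button("Cancel", role: .cancel) { }
        } message: {
            // This copy must describe what actually happens, not a
            // placeholder timeline you intend to revise later.
            Text("This immediately removes your anonymous identifier and all data associated with it.")
        }
    }

    func deleteAnonymousData() {
        // Placeholder: call whatever actually deletes the uploaded
        // identifier and its associated records.
    }
}
```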

## Audit Info.plist for vestigial declarations

Background modes, background fetch, location entitlements, HealthKit declarations, and similar Info.plist entries can sit unused for months while the codebase iterates. They get flagged when they do not match an actual feature. The 2.5.4 Software Requirements enforcement here is mechanical and unambiguous.

Specifically: any UIBackgroundModes entry needs to correspond to a feature that actually uses it. If your app declares “location” as a background mode but you never use continuous background location and have allowsBackgroundLocationUpdates = false, remove the declaration. Region monitoring does not require it. The audit takes ten minutes and prevents one rejection cycle.
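One way to keep this honest is a small audit check. This is a hypothetical sketch (the function and variable names are mine, not from any project), assuming the allowed-modes set is maintained by hand alongside the actual feature list:

```swift
import Foundation

// Hypothetical audit helper: compare the UIBackgroundModes an
// Info.plist declares against the modes the app actually uses.
let allowedModes: Set<String> = []   // this v1.0 uses no background modes

func vestigialModes(inPlist plist: [String: Any]) -> [String] {
    let declared = plist["UIBackgroundModes"] as? [String] ?? []
    return declared.filter { !allowedModes.contains($0) }
}

// A plist dictionary that still carries a leftover "location" entry:
let stale: [String: Any] = ["UIBackgroundModes": ["location"]]
print(vestigialModes(inPlist: stale))   // prints ["location"]
```

In a real project this would read the built product's Info.plist (in a unit test or a build phase) rather than a literal dictionary, and fail the build when anything vestigial turns up.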

## Functional Terms of Use link in metadata

The Privacy Policy field in App Store Connect is well known. The Terms of Use requirement is less well known. If you are using the standard Apple EULA, include a link to the Terms of Use in the App Description. If you are using a custom EULA, add it in App Store Connect's custom EULA field.

The terms page itself should disclose all paid products with subscription length, price, and renewal mechanics. Most teams have these elements scattered across separate pages or buried in the privacy policy. Consolidate them into one terms page that the reviewer can read in two minutes.

## App Preview videos: use raw screen captures

3D phone mockups, device frames, and stylised marketing compositions are reviewer-discretionary calls under 2.3.4 Accurate Metadata. The public guidance page does not explicitly forbid device frames, but the internal training material reviewers operate from appears to. Plain native-resolution screen captures are unambiguously safe.

Save the marketing chrome for your own website, social media, and Reddit. Put screen captures in the App Preview slot. The conversion difference between “polished mockup preview” and “raw screen recording preview” is small. The risk reduction is large.

## Explicit IAP navigation in App Review Notes

Reviewers operate on time budgets of 5 to 15 minutes per app. They will not always find every IAP attached to the version. If your IAPs are reachable through different paths in the app (separate paywalls, separate cards, deeper navigation), include click-by-click navigation breadcrumbs for every IAP in your App Review Notes.

Format that works:

- Premium subscriptions and Lifetime IAP: Settings tab > Premium card > Get Premium button > Premium upgrade sheet
- Pro subscriptions and Lifetime IAP: Settings tab > Pro card > Get Pro button
- Consumable Tip Jar items: Settings tab > scroll past tier cards > Tip Jar card > Show Tip Options

This sounds excessive. It prevents one specific rejection cycle (Guideline 2.1(b) Information Needed) that otherwise costs you a day.

## Calendar buffer between submission and committed launch date

For a v1.0 submission with subscriptions in a sensitive category, plan for at least seven calendar days between first submission and the launch date you have committed externally. Each rejection cycle runs roughly 24 hours when you respond promptly. Four rejection cycles is a realistic worst case for sensitive-category v1.0 submissions.

If your launch date is committed externally (Pre-Order configured, In-App Events scheduled, marketing burst booked, journalists pitched), seven days of buffer is the minimum. Ten days is more comfortable. Anything less than seven and you risk missing the date over a single procedural rejection.

# Patterns worth understanding

A few observations from inside the cycle that are not obvious from the documentation.

## Each rejection finds different things

Four rejections came back flagging non-overlapping items. Every reviewer had access to every screen the prior reviewer had cleared. The first reviewer flagged a metadata EULA link. The second flagged three different items in the same binary (background mode, pricing display, deletion button). The third flagged a marketing asset. The fourth asked a navigation question.

This is not a bug. It is a feature of how App Review is structured. Reviews appear to be per-issue, not per-app. Reviewers stop at the first major issue they catch within their time budget. They are not required to do exhaustive audits, and the system does not give a “previously cleared” status to surfaces a prior reviewer accepted. The structure favours institutional throughput over developer experience.

The implication: you cannot get a comprehensive audit out of any single review pass. You can only get whatever the current reviewer surfaces in their time window, and you have to accept that the next reviewer can independently surface anything else in the next pass.

## Each rejection tends to be smaller in scope than the last

A loose pattern, not a guarantee. The first rejection is often substantive (a missing requirement, an architectural issue). The second is still substantive but more checklist-shaped. The third drifts toward peripheral metadata. The fourth is sometimes a procedural question rather than a rejection at all.

The reason is not that the app is “getting better” with each pass. The reason is that the reviewable surface area in the binary and metadata shrinks with each pass as previously-flagged items are fixed and previously-cleared items are no longer in scope. Reviewers are independently exhausting the available checklist surface, which means later reviews have less to find.

This pattern is reassuring while you are in the cycle, but do not bank on it. A late-cycle reviewer can still surface a substantive item that earlier reviewers missed.

## Reviewer thoroughness varies dramatically

Across four reviews of the same binary, the variance in what each reviewer found was substantial. One found three items. Another found one peripheral item. A third asked a question that was answerable by tapping a card in the second tab of the app.

You have no influence over which reviewer receives your submission on any given pass. Plan for the variance. Do not assume the next pass will be more thorough or more lenient than the last. They are independent samples from a wide distribution.

## Beta App Review approval does not predict full App Review approval

The two pipelines have different teams and different criteria. A build that passes Beta with the same features that get flagged in production review is not a contradiction. It is two different review processes.

If you are using TestFlight to confirm reviewer-readiness for production, you are using it for the wrong signal. TestFlight catches build-validity and basic compliance issues. Production App Review catches the full enforcement surface. They are not interchangeable.

## Sensitive-category enforcement vectors stack

Subscriptions, especially with introductory pricing or trial offers, trigger 3.1.2 enforcement automatically. Health-adjacent categories (alcohol, fitness, mental health, sleep) draw reviewer attention to 1.4.1 medical-claim concerns. Privacy-sensitive features (location, HealthKit, anonymous identifiers) trigger 5.1.1 scrutiny. First submissions (v1.0) trigger comprehensive reviews. In-App Events tied to launch dates trigger metadata scrutiny.

Each vector individually is manageable. They stack multiplicatively. A serious launch in a sensitive category with subscriptions, multiple platforms, and an In-App Event has every vector active simultaneously. A free single-screen utility with no IAPs gets a 2-minute pass. The difficulty of your review experience is correlated with the seriousness and breadth of your launch, not with the quality of your app.

## Documentation and enforcement do not always match perfectly

A rejection can cite guidance that does not appear explicitly on the linked public page. Reviewers operate from internal training material that overlaps with but is not identical to the public Review Guidelines. If you find a rejection that does not match the public docs, you can push back politely. Sometimes that gets reversed, sometimes it does not.

When you push back, do it in writing in the Resolution Center, in a structured paragraph that quotes the public guidance and asks for the specific clause being applied. Avoid frustration in the language. The reviewer is reading the response on a time budget; the response that gets read carefully is the one that respects that.

# How to respond to rejections constructively

A few practical notes on the Resolution Center back-and-forth.

## Reply same-day if you can

The fastest way out of the cycle is to keep the cycle moving. Each rejection cycle clock resets when you respond. Same-day responses produce same-week resolution; multi-day responses extend the cycle proportionally.

This requires that your team is structured to ship fixes quickly. For an indie developer, this often means clearing your calendar for the days following submission. The submission window cannot be treated as background work.

## Bundle your fixes into one resubmission

If a rejection flags three items, fix all three before resubmitting. Do not send a “fixed two of three, working on the third” reply. The next reviewer will surface different items entirely; the partial-fix reply just adds another cycle to the queue.

## Include screen recordings of fix verification

For any non-trivial fix, attach a 30-second screen recording showing the fix in action. The deletion flow, the corrected pricing display, the new navigation path. The reviewer is more likely to clear the issue on the next pass if they can see the fix without having to navigate to it themselves.

## Ask for comprehensive review in your reply

A polite request that any remaining concerns be raised together in the current pass, rather than spread across additional cycles, is reasonable. It does not always work, but it occasionally does. The phrasing matters: “We have addressed every issue raised so far promptly and in good faith. We would appreciate a comprehensive review against all applicable guidelines in this pass to minimise further cycles.”

This puts the reviewer in a position where their next response either clears the app or surfaces every remaining concern. Both outcomes are better than another single-issue rejection.

## Treat each pass as independent

The temptation is to assume the next reviewer has read the prior reviewer’s notes. They probably have not. Each Resolution Center reply should be self-contained, summarising the state of the submission for someone who has not seen the prior conversation.

This means a small amount of repetition between cycles is necessary. The detailed App Review Notes that worked for the first reviewer should be re-attached or restated for the third reviewer. Do not assume institutional memory.

# The personal timeline

For context, here is what the actual five days looked like.

I submitted build 22 on Saturday afternoon. The submission included 4 auto-renewing subscriptions, 2 non-consumable lifetime IAPs, 5 consumable Tip Jar items, App Description, screenshots, App Preview videos, an In-App Event tied to launch week, and Pre-Order configured for the launch date in 175 countries.

I had spent the prior six weeks systematically eliminating every issue I could anticipate. App Review Notes were drafted with reviewer-friendly explanations of the parts of the app most likely to draw scrutiny. DSA trader info was approved a week prior. App Privacy nutrition labels were live. Beta App Review had cleared multiple builds. I felt ready.

Day 2, 5:15 AM: First rejection. Guideline 3.1.2(c). Missing functional Terms of Use link in App Store metadata. Same-day fix shipped (added a Terms link to the in-app upgrade sheet, expanded the terms page to disclose all paid products, updated the App Description). One-day cycle.

Day 4, 4:03 PM: Second rejection. Three items in one message. Guideline 2.5.4 (vestigial UIBackgroundModes “location” entry that should not have been there). Guideline 3.1.2(c) (introductory pricing displayed more prominently than the billed amount). Guideline 5.1.1(v) (anonymous identifier toggle-off needed an explicit labelled deletion button). Same-day fix shipped (removed the Info.plist entry, inverted the pricing hierarchy, added an explicit “Delete Anonymous Data” button with a confirmation alert). One-day cycle.

Day 5, 5:05 PM: Third rejection. Guideline 2.3.4 Accurate Metadata. The App Preview videos featured 3D phone mockups demonstrating the app’s screens. The reviewer’s note cited content that does not “sufficiently show the app in use” and specifically called out device frames.

The challenge: the public guidance page at developer.apple.com/app-store/app-previews/ does not explicitly forbid device frames. The closest documented rule is “Stay within the app” with examples about over-the-shoulder shots and physical interaction with devices. Neither applied.

With launch four days away and a four-day cumulative delay already incurred, I made the trade. I removed the App Preview videos to unblock the resubmission, asked politely whether there was a specific guideline clause I had missed, and sent a structured paragraph requesting that any remaining concerns be raised together in this pass. The App Preview removal stood. The reconsideration request received no direct response.

Day 6, 7:23 PM: Fourth message. Guideline 2.1(b) Information Needed. Not a rejection, technically. A pause on the review to ask a question. The reviewer could not locate two of the IAPs attached to the version and asked where to find them.

The screenshot they attached showed the first paywall correctly. The second paywall was on the same Settings screen, accessible by tapping a card directly below the first card the reviewer had successfully navigated to. I replied with explicit step-by-step navigation for all 11 IAPs, sent within an hour.

Day 7, 8:30 PM: Approval. Build cleared. Pre-Order activated. Launch week is set for May 11.

The five days were intense. None of them were existential, in retrospect, though it did not feel that way mid-cycle. The cumulative delay almost cost the launch date but did not. The actual launch product is exactly what was built before submission. Nothing was cut, modified, or deferred to clear review.

# What the cycle does not flag is informative

Across four reviews, the surfaces I had spent the most time worrying about were never flagged.

The habit-score feature (a 0-100 score with helpers and harmers across six weighted pillars). The medication tracking surface (logging only, with no advisory output). The privacy model (no accounts, on-device data, anonymous opt-in sharing with explicit deletion). HealthKit writes. None of these surfaces, the parts of the app most likely to trigger health-claim or privacy concerns, were flagged in any of the four reviews. Four independent reviewers all looked, all chose not to flag them.

The implication for sensitive-category builders: the proactive disclosure work matters. The App Review Notes that explain methodology, the in-app disclaimers, the careful naming choices, the conservative framing in the App Description. None of it is glamorous. All of it contributed to four reviewers in a row choosing not to flag the most arguable surfaces. The 18+ age rating, the disclaimer sheet gate at first launch, the explicit “we do not provide medical advice” copy in the relevant areas. All of it worked.

What got flagged instead was the procedural and mechanical surface area. Pricing display hierarchy. Background mode declarations. Metadata link presence. Marketing asset compositions. IAP discoverability. The substantive parts of the app cleared every pass.

# Final notes for indie developers

A few observations that did not fit cleanly above.

The cheap apps you see in the App Store are not getting more lenient treatment. They were approved years ago when standards were lower, or they do not trigger the scrutiny vectors your serious launch does. The system rewards low-effort low-risk apps with low scrutiny. Your effort and care are part of why your review is harder. It is not a comment on the quality of your app.

The system is not staffed to give you the experience you want. App Review is a contractor pool. Reviewers handle dozens of apps per day at minutes per app. They are trained on the Review Guidelines as a checklist, not on the specific UX of any individual app. The empathy gap is structural, not personal.

Documentation of indie experiences eventually moves things. Apple has improved App Review meaningfully over the last decade. Some of that improvement traces to indie developers consistently documenting their experiences and aggregate pressure building. Your individual rejection will not get any specific reviewer retrained. The structured aggregate of indie experiences does eventually move the institutional surface.

You will probably ship the product you built. Every feature I worried might get cut shipped intact. The four-rejection cycle felt existential while it was happening. The actual outcome converged on approval as long as I kept responding clearly and kept the launch deadline buffer in my back pocket.

If you are mid-cycle right now, pulling all-nighters in the Resolution Center: the launch happens. Keep responding. The system is not personal. The product is yours.


This post documents the launch of AlcoLog, an iOS drink tracking app. AlcoLog launches on the App Store on May 11, 2026, after the cycle described above. The app is free with optional Premium tiers and is available for pre-order now in 175 countries.

If you are an iOS developer and want to compare notes on App Review experiences, the indie-iOS community on r/iOSProgramming and Indie Hackers are good places to start. The more specifically each of us documents what we encountered, the more useful the aggregate becomes for the next person.