.pr_agent_accepted_suggestions
| PR 669 (2025-10-28) |
[possible issue] Prevent null reference and improve version parsing
✅ Prevent null reference and improve version parsing
Add a null check for the assembly reference to prevent a potential NullReferenceException and construct the version object directly from its components to avoid fragile string parsing.
SPTarkov.Server/Modding/ModValidator.cs [170-182]
-var sptCoreAsmRefVersion = assembly
+var sptCoreAsmRef = assembly
.GetReferencedAssemblies()
- .FirstOrDefault(asm => asm.Name == "SPTarkov.Server.Core")
- ?.Version?.ToString();
+ .FirstOrDefault(asm => asm.Name == "SPTarkov.Server.Core");
-var modRefVersion = new SemanticVersioning.Version(sptCoreAsmRefVersion?[..^2]!);
+if (sptCoreAsmRef?.Version is null)
+{
+ continue;
+}
+
+var modRefVersion = new SemanticVersioning.Version(
+ sptCoreAsmRef.Version.Major,
+ sptCoreAsmRef.Version.Minor,
+ sptCoreAsmRef.Version.Build
+);
+
if (modRefVersion > sptVersion)
{
throw new Exception(
$"Mod: {modName} requires a minimum SPT version of `{modRefVersion}`, but you are running `{sptVersion}`. Please update SPT to use this mod."
);
}
Suggestion importance[1-10]: 8
Why: The suggestion correctly identifies a potential NullReferenceException that would crash the application, and also points out fragile string manipulation, providing a much more robust implementation.
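For context, the failure mode of the string slice can be sketched in isolation (this sketch assumes the SemanticVersioning NuGet package the diff already uses; the sample version number is contrived):

```csharp
using System;

// Sketch only: building the semantic version from numeric components avoids
// the fragile "trim the last two characters" slice, which breaks whenever the
// revision component has more than one digit.
var asmVersion = new Version(4, 0, 0, 10);

// Fragile: "4.0.0.10"[..^2] yields "4.0.0." - not a valid semver string.
var sliced = asmVersion.ToString()[..^2];

// Robust: construct directly from Major/Minor/Build, no string parsing.
var semver = new SemanticVersioning.Version(
    asmVersion.Major, asmVersion.Minor, asmVersion.Build);
```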
| PR 661 (2025-10-26) |
[possible issue] Fix compilation error from incorrect operator
✅ Fix compilation error from incorrect operator
Remove the null-coalescing operator ?? 0 from the health calculation, as it is incorrectly applied to a non-nullable double and will cause a compilation error.
Libraries/SPTarkov.Server.Core/Helpers/HealthHelper.cs [148-151]
matchingProfilePart.Health.Current =
matchingProfilePart.Health.Maximum
- * HealthConfig.HealthMultipliers.Death
- ?? 0;
+ * HealthConfig.HealthMultipliers.Death;
Suggestion importance[1-10]: 10
Why: The suggestion correctly identifies a compilation error caused by applying the null-coalescing operator ?? to a non-nullable double type, which is a critical issue introduced in the PR.
| PR 645 (2025-10-20) |
[possible issue] Move No-GC region logic to middleware
✅ Move No-GC region logic to middleware
Move the No-GC region management logic from HandleRequest into the RequestTrackingMiddleware to fix a race condition that can cause resource leaks. The middleware should start the region when the request count becomes 1 and end it when the count returns to 0.
SPTarkov.Server/Program.cs [158-199]
private static async Task HandleRequest(HttpContext context, RequestDelegate next)
{
- var config = context.RequestServices.GetRequiredService<ConfigServer>().GetConfig<CoreConfig>();
-
- // if no other requests are running, start the no GC region, otherwise dont start it
- if (!RequestTrackingMiddleware.OtherRequestsActive)
- {
- if (config.EnableNoGCRegions && GCSettings.LatencyMode != GCLatencyMode.NoGCRegion)
- {
- try
- {
- GC.TryStartNoGCRegion(
- 1024L * 1024L * 1024L * config.NoGCRegionMaxMemoryGB,
- 1024L * 1024L * 1024L * config.NoGCRegionMaxLOHMemoryGB,
- true
- );
- }
- catch (Exception)
- {
- // ignored, we keep going
- }
- }
- }
-
await context.RequestServices.GetRequiredService<HttpServer>().HandleRequest(context, next);
-
- // if no other requests are running, end the no GC region, otherwise dont stop it as other requests need it still
- if (!RequestTrackingMiddleware.OtherRequestsActive)
- {
- if (config.EnableNoGCRegions && GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
- {
- try
- {
- GC.EndNoGCRegion();
- }
- catch (Exception)
- {
- // ignored, we dont care about handling this
- }
- }
- }
}
Suggestion importance[1-10]: 9
Why: This suggestion correctly identifies a critical race condition in the No-GC region management that could lead to resource leaks, and proposes a robust architectural solution by centralizing the logic in the middleware.
[learned best practice] Use specific exception types
✅ Use specific exception types
Throw ArgumentOutOfRangeException with parameter name and bounds to provide clearer diagnostics and align with .NET best practices.
Libraries/SPTarkov.Server.Core/Models/Spt/Config/CoreConfig.cs [58-61]
if (value <= 0)
{
- throw new Exception($"Invalid value {nameof(NoGCRegionMaxMemoryGB)}: {value}. Must be greater than zero.");
+ throw new ArgumentOutOfRangeException(nameof(NoGCRegionMaxMemoryGB), value, "Must be greater than zero.");
}
Suggestion importance[1-10]: 6
Why: Relevant best practice - Guard high-risk state mutations with robust validation and avoid throwing generic exceptions from config setters; prefer specific exceptions and validation methods.
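The recommended pattern works in any property setter; a minimal standalone sketch (the class here is illustrative, not the real CoreConfig type):

```csharp
using System;

var cfg = new CoreConfigExample();
try
{
    cfg.NoGCRegionMaxMemoryGB = 0;
}
catch (ArgumentOutOfRangeException ex)
{
    // The exception carries the parameter name and the offending value -
    // far richer diagnostics than a bare Exception with a formatted string.
    Console.WriteLine(ex.ParamName);   // NoGCRegionMaxMemoryGB
    Console.WriteLine(ex.ActualValue); // 0
}

public class CoreConfigExample
{
    private int _noGCRegionMaxMemoryGB;

    public int NoGCRegionMaxMemoryGB
    {
        get => _noGCRegionMaxMemoryGB;
        set
        {
            if (value <= 0)
            {
                throw new ArgumentOutOfRangeException(
                    nameof(NoGCRegionMaxMemoryGB), value, "Must be greater than zero.");
            }
            _noGCRegionMaxMemoryGB = value;
        }
    }
}
```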
| PR 639 (2025-10-19) |
[general] Simplify complex conditional logic
✅ Simplify complex conditional logic
Simplify the complex conditional statement for removing an item by using the null-conditional (?.) and null-coalescing (??) operators. This will make the code more concise and readable.
Libraries/SPTarkov.Server.Core/Utils/Collections/ProbabilityObjectArray.cs [229]
-if (neverRemoveWhitelist is null || neverRemoveWhitelist is not null && !neverRemoveWhitelist.Contains(chosenItem.Key))
+if (!(neverRemoveWhitelist?.Contains(chosenItem.Key) ?? false))
Suggestion importance[1-10]: 4
Why: The suggestion correctly identifies that the conditional logic is verbose and can be simplified. The proposed change improves code conciseness and readability by using more idiomatic C# operators.
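The equivalence of the two forms can be checked with a standalone sketch (`A || (!A && B)` reduces to `A || B`, which is exactly what the null-conditional form expresses):

```csharp
using System.Collections.Generic;

List<string>? whitelist = null; // also try ["item"] or ["other"]
var key = "item";

// Verbose original: && binds tighter than ||, so this is A || (!A && B).
var verbose = whitelist is null || whitelist is not null && !whitelist.Contains(key);

// Concise form: null whitelist falls through ?? to false, then negates to true.
var concise = !(whitelist?.Contains(key) ?? false);

// Both are true when the whitelist is null or does not contain the key,
// and false only when a non-null whitelist contains the key.
```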
| PR 636 (2025-10-18) |
[general] Preserve original exception details
✅ Preserve original exception details
Preserve the original exception's details by passing the caught JsonException as the inner exception when throwing a new Exception.
Libraries/SPTarkov.Server.Core/Servers/ConfigServer.cs [90-94]
catch (JsonException ex)
{
Logger.Error($"Config file: {file} failed to deserialize", ex);
- throw new Exception($"Server will not run until the: {file} config error mentioned above is fixed");
+ throw new Exception($"Server will not run until the: {file} config error mentioned above is fixed", ex);
}
Suggestion importance[1-10]: 7
Why: The suggestion correctly advises to include the original JsonException as an inner exception, which is a best practice for preserving stack trace information and simplifying debugging.
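A minimal sketch of what the extra constructor argument buys (standalone, not the actual ConfigServer code):

```csharp
using System;

try
{
    try
    {
        throw new FormatException("unexpected token at line 12");
    }
    catch (FormatException ex)
    {
        // The second argument preserves the original failure as InnerException,
        // keeping its message and stack trace available to upstream handlers.
        throw new Exception("Config failed to load", ex);
    }
}
catch (Exception wrapped)
{
    Console.WriteLine(wrapped.Message);                 // Config failed to load
    Console.WriteLine(wrapped.InnerException?.Message); // unexpected token at line 12
}
```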
[possible issue] Correct a misleading exception message
✅ Correct a misleading exception message
Correct the misleading exception message from "mentioned below" to "mentioned above" to accurately reflect the position of the corresponding error log in the console output.
Libraries/SPTarkov.Server.Core/Servers/ConfigServer.cs [96-100]
if (deserializedContent == null)
{
Logger.Error($"Config file: {file} is corrupt. Use a site like: https://jsonlint.com to find the issue.");
- throw new Exception($"Server will not run until the: {file} config error mentioned below is fixed");
+ throw new Exception($"Server will not run until the: {file} config error mentioned above is fixed");
}
Suggestion importance[1-10]: 5
Why: The suggestion correctly identifies and fixes a misleading error message, improving user experience and clarity during debugging by pointing to the correct location of the logged error.
| PR 621 (2025-10-16) |
[general] Improve environment variable check flexibility
✅ Improve environment variable check flexibility
Make the DISABLE_VIRTUAL_TERMINAL environment variable check more flexible by also accepting "true" (case-insensitively) in addition to "1".
SPTarkov.Server/Program.cs [237]
-if (!OperatingSystem.IsWindows() || Environment.GetEnvironmentVariable("DISABLE_VIRTUAL_TERMINAL") == "1")
+var disableFlag = Environment.GetEnvironmentVariable("DISABLE_VIRTUAL_TERMINAL");
+if (!OperatingSystem.IsWindows() || disableFlag == "1" || string.Equals(disableFlag, "true", StringComparison.OrdinalIgnoreCase))
Suggestion importance[1-10]: 5
Why: The suggestion improves the user-friendliness of the new environment variable check by making it more flexible, which is a good practice for configuration flags.
| PR 614 (2025-10-12) |
[possible issue] Implement deserialisation to prevent future crashes
✅ Implement deserialisation to prevent future crashes
Implement the Read method in the ToStringJsonConverter to handle deserialization, making the converter more robust and complete for both serialization and deserialization.
Libraries/SPTarkov.Server.Core/Utils/Json/Converters/ToStringJsonConverter.cs [8-11]
public override T Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
{
- throw new NotSupportedException($"Deserialization of {typeof(T).Name} from string is not supported.");
+ if (reader.TokenType == JsonTokenType.Null)
+ {
+ return default;
+ }
+
+ if (reader.TokenType == JsonTokenType.String)
+ {
+ var value = reader.GetString();
+ try
+ {
+ return (T)Activator.CreateInstance(typeToConvert, value);
+ }
+ catch (Exception ex)
+ {
+ throw new JsonException($"Unable to convert \"{value}\" to {typeof(T).Name}.", ex);
+ }
+ }
+
+ throw new JsonException($"Expected string to deserialize {typeof(T).Name} but got {reader.TokenType}.");
}
Suggestion importance[1-10]: 6
Why: The suggestion correctly identifies that the Read method is not implemented and proposes a robust implementation, which improves the converter's completeness and reusability for future use cases.
| PR 596 (2025-09-29) |
[learned best practice] Validate and refresh mutable weights
✅ Validate and refresh mutable weights
Validate presence and non-negative weight before decrement, and clamp to zero to prevent key errors/negatives; if all weights are zero or dictionary is empty, refresh from config instead of clearing silently.
Libraries/SPTarkov.Server.Core/Generators/WeatherGenerator.cs [51-69]
-if (previousPreset.HasValue)
+if (previousPreset.HasValue && presetWeights.TryGetValue(previousPreset.Value, out var prevWeight))
{
- // We know last picked preset, Adjust weights
- // Make it less likely to be picked now
- presetWeights[previousPreset.Value] -= 1;
- logger.Info($"{previousPreset.Value} weight reduced by: 1 to: {presetWeights[previousPreset.Value]}");
+ var newWeight = Math.Max(0, prevWeight - 1);
+ presetWeights[previousPreset.Value] = newWeight;
+ logger.Info($"{previousPreset.Value} weight adjusted to: {newWeight}");
}
-// Assign value to previousPreset to be picked up next loop
-previousPreset = weightedRandomHelper.GetWeightedValue(presetWeights);
-logger.Warning($"Chose: {previousPreset}");
-
-// Check if chosen preset has been exhausted and reset if necessary
-if (presetWeights[previousPreset.Value] <= 0)
+if (presetWeights.Count == 0 || presetWeights.Values.All(w => w <= 0))
{
- logger.Info($"{previousPreset.Value} is 0, resetting weights");
- // Flag for fresh presets
- presetWeights.Clear();
+ logger.Info("Preset weights exhausted or empty, reloading from config");
+ presetWeights = cloner.Clone(GetWeatherPresetWeightsBySeason(currentSeason));
}
+var chosen = weightedRandomHelper.GetWeightedValue(presetWeights);
+previousPreset = chosen;
+logger.Info($"Chose: {chosen}");
+
+if (presetWeights.TryGetValue(chosen, out var chosenWeight) && chosenWeight <= 0)
+{
+ logger.Info($"{chosen} weight is 0, reloading weights");
+ presetWeights = cloner.Clone(GetWeatherPresetWeightsBySeason(currentSeason));
+}
+
Suggestion importance[1-10]: 6
Why: Relevant best practice - Gate high-impact operations behind precise checks and avoid silent state corruption; validate and clamp mutable weighted state before and after mutation.
| PR 587 (2025-09-09) |
[general] Eliminate redundant dictionary lookup
✅ Eliminate redundant dictionary lookup
Use TryGetValue with the actual value parameter to avoid double dictionary lookup. This improves performance by eliminating the redundant _cachedItems[guid] access.
Libraries/SPTarkov.Server.Core/Services/Mod/ModItemCache.cs [53-59]
var guid = mod.ModMetadata.ModGuid;
-if (!_cachedItems.TryGetValue(guid, out _))
+if (!_cachedItems.TryGetValue(guid, out var items))
{
- _cachedItems.Add(guid, []);
+ items = [];
+ _cachedItems.Add(guid, items);
}
-_cachedItems[guid].Add(modId);
+items.Add(modId);
Suggestion importance[1-10]: 5
Why: The suggestion correctly points out a double dictionary lookup and proposes a more performant pattern by using the out parameter of TryGetValue to avoid the second lookup.
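The single-lookup pattern in isolation, with placeholder string keys standing in for the real MongoId/mod types:

```csharp
using System.Collections.Generic;

var cache = new Dictionary<string, List<string>>();

Add("mod-a", "item-1");
Add("mod-a", "item-2"); // second call: one TryGetValue, no indexer re-lookup

void Add(string guid, string modId)
{
    // The out parameter hands back the existing list, so the hot path
    // (key already present) costs exactly one hash lookup. The original
    // code discarded the value and paid for a second lookup via the indexer.
    if (!cache.TryGetValue(guid, out var items))
    {
        items = [];
        cache.Add(guid, items);
    }
    items.Add(modId);
}
```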
| PR 586 (2025-09-08) |
[possible issue] Use a concurrent set for expired IDs
✅ Use a concurrent set for expired IDs
Using ConcurrentBag introduces duplicates and lacks efficient querying; .Contains is O(n) and not thread-safe for logical set semantics. Replace it with a concurrent set (e.g., ConcurrentDictionary<MongoId, byte>) to ensure uniqueness, efficient checks, and safe concurrent adds, and remove the coarse lock where unnecessary. Also avoid calling .Clear() while other threads may iterate; protect with the same lock or swap the collection atomically.
Libraries/SPTarkov.Server.Core/Utils/RagfairOfferHolder.cs [24-414]
-private readonly ConcurrentBag<MongoId> _expiredOfferIds = [];
-
-...
+// Use a concurrent set via dictionary to ensure uniqueness and efficient Contains
+private ConcurrentDictionary<MongoId, byte> _expiredOfferIds = new();
public List<MongoId> GetStaleOfferIds()
{
lock (_processExpiredOffersLock)
{
- return _expiredOfferIds.ToList();
+ return _expiredOfferIds.Keys.ToList();
}
}
-...
-
public void FlagOfferAsExpired(MongoId staleOfferId)
{
- lock (_processExpiredOffersLock)
- {
- _expiredOfferIds.Add(staleOfferId);
- }
+ // idempotent add; no external lock needed
+ _expiredOfferIds.TryAdd(staleOfferId, 0);
}
-
-...
public void ResetExpiredOfferIds()
{
lock (_processExpiredOffersLock)
{
- _expiredOfferIds.Clear();
+ // swap to avoid racing with iterations
+ _expiredOfferIds = new ConcurrentDictionary<MongoId, byte>();
}
}
-...
-
public void FlagExpiredOffersAfterDate(long timestamp)
{
- lock (_processExpiredOffersLock)
- {
- var offers = GetOffers();
- Parallel.ForEach(
- offers,
- offer =>
+ var offers = GetOffers();
+ Parallel.ForEach(
+ offers,
+ offer =>
+ {
+ if (offer.IsTraderOffer() || !offer.IsStale(timestamp))
{
- if (_expiredOfferIds.Contains(offer.Id) || offer.IsTraderOffer())
- {
- // Already flagged or trader offer (handled separately), skip
- return;
- }
+ return;
+ }
- if (!offer.IsStale(timestamp))
- {
- return;
- }
-
- _expiredOfferIds.Add(offer.Id);
- }
- );
- }
+ _expiredOfferIds.TryAdd(offer.Id, 0);
+ }
+ );
}
Suggestion importance[1-10]: 9
Why: The suggestion correctly identifies that using ConcurrentBag introduces inefficiency (O(n) Contains) and allows duplicate entries, which was not possible with the original HashSet. The proposed ConcurrentDictionary is a superior alternative for a thread-safe set, fixing a performance and correctness issue introduced in the PR.
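The set-via-dictionary idea can be sketched generically (string keys stand in for MongoId; the byte values are dummies):

```csharp
using System.Collections.Concurrent;
using System.Linq;

var expired = new ConcurrentDictionary<string, byte>();

// TryAdd is atomic and idempotent - a duplicate flag is simply ignored,
// unlike ConcurrentBag.Add, which stores every repeat.
expired.TryAdd("offer-1", 0);
expired.TryAdd("offer-1", 0);

// O(1) membership test, versus ConcurrentBag's O(n) Contains scan.
var isExpired = expired.ContainsKey("offer-1"); // true

// Keys produces a snapshot, so it is safe to enumerate while other
// threads keep flagging offers.
var snapshot = expired.Keys.ToList(); // one element, not two
```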
[general] De-duplicate expired ID snapshot
✅ De-duplicate expired ID snapshot
Creating the copy under lock is fine, but with the current ConcurrentBag it may include duplicates, causing repeated processing. Ensure the snapshot is distinct to prevent double-processing and wasted work, especially when logging and building item lists.
Libraries/SPTarkov.Server.Core/Utils/RagfairOfferHolder.cs [344-367]
public IEnumerable<List<Item>> GetExpiredOfferItems()
{
List<MongoId> expiredOfferIdsCopy;
lock (_processExpiredOffersLock)
{
- expiredOfferIdsCopy = _expiredOfferIds.ToList();
+ expiredOfferIdsCopy = _expiredOfferIds
+ .Distinct()
+ .ToList();
}
- // list of lists of item+children
var expiredItems = new List<List<Item>>();
foreach (var expiredOfferId in expiredOfferIdsCopy)
{
var offer = GetOfferById(expiredOfferId);
if (offer is null)
{
logger.Warning($"Expired offerId: {expiredOfferId} not found, skipping");
continue;
}
...
}
}
Suggestion importance[1-10]: 7
Why: The suggestion correctly points out that ConcurrentBag can contain duplicates, and processing them can lead to redundant work. Adding .Distinct() is a valid and good practice to prevent this, improving the robustness of the GetExpiredOfferItems method.
| PR 568 (2025-08-24) |
[general] Align JSON property casing
✅ Align JSON property casing
JSON property names are typically camelCase in this codebase; using "DialogueProgress" will create a mismatched payload and persistence issues. Align the JSON name with the request field ("nodePathTraveled") or a consistent camelCase key to ensure correct serialisation.
Libraries/SPTarkov.Server.Core/Models/Eft/Profile/SptProfile.cs [63-67]
/// <summary>
/// Stores the most recently sent dialog progress result from the client
/// </summary>
-[JsonPropertyName("DialogueProgress")]
+[JsonPropertyName("dialogueProgress")]
public List<NodePathTraveled>? DialogueProgress { get; set; }
Suggestion importance[1-10]: 6
Why: The suggestion correctly points out an inconsistency in JSON property naming conventions, and aligning it to dialogueProgress improves consistency with other properties and client-side expectations.
| PR 560 (2025-08-19) |
[possible issue] Fix seconds-to-milliseconds conversion
✅ Fix seconds-to-milliseconds conversion
The conversion from seconds to milliseconds is incorrect (uses 60 instead of 1000). This will produce values 60x too small, breaking timeouts and timers. Fix the multiplier to 1000 and guard against integer overflow.
Libraries/SPTarkov.Server.Core/Utils/TimeUtil.cs [160-163]
public int GetSecondsAsMilliseconds(int seconds)
{
- return seconds * 60 * 1000;
+ // Prevent overflow and ensure correct conversion
+ checked
+ {
+ return seconds * 1000;
+ }
}
Suggestion importance[1-10]: 9
Why: The suggestion correctly identifies a critical bug in the GetSecondsAsMilliseconds method where the conversion logic is wrong, which would lead to incorrect time calculations.
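What the `checked` block changes can be shown with a standalone sketch (the oversized input is contrived to force overflow):

```csharp
using System;

var seconds = int.MaxValue / 500; // large enough that * 1000 overflows int

// Default (unchecked) context: the multiplication silently wraps around,
// producing a garbage value with no indication anything went wrong.
var wrapped = unchecked(seconds * 1000);

try
{
    // checked context: overflow throws instead of wrapping.
    _ = checked(seconds * 1000);
}
catch (OverflowException)
{
    Console.WriteLine("overflow detected instead of silent wrap-around");
}
```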
| PR 558 (2025-08-19) |
[possible issue] Prevent null service access
✅ Prevent null service access
Avoid using the null-forgiving operator on a potentially null service to prevent a runtime NullReferenceException. Use GetRequiredService to fail fast with a clear error if the service is not registered.
SPTarkov.Server/Program.cs [89]
-await app.Services.GetService<SptServerStartupService>()!.Startup();
+await app.Services.GetRequiredService<SptServerStartupService>().Startup();
Suggestion importance[1-10]: 6
Why: The suggestion correctly identifies the risk of using the null-forgiving operator with GetService and proposes using GetRequiredService, which improves robustness by failing fast with a clear error if the service is not registered.
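A standalone sketch of the two failure modes, assuming the Microsoft.Extensions.DependencyInjection package (an empty container stands in for the real app services):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection().BuildServiceProvider(); // nothing registered

// GetService returns null for an unregistered type; the ! operator then
// defers the failure to a confusing NullReferenceException at the call site.
var maybe = services.GetService<SomeService>(); // null

// GetRequiredService fails fast with an InvalidOperationException that
// names the missing service type - a far clearer diagnostic.
try
{
    var required = services.GetRequiredService<SomeService>();
}
catch (InvalidOperationException ex)
{
    Console.WriteLine(ex.Message); // names SomeService as unregistered
}

public class SomeService;
```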
[general] Remove obsolete commented code
✅ Remove obsolete commented code
If the new startup flow replaces the background service, ensure the old hosted service is removed entirely to avoid confusion and accidental re-enabling. Delete the commented registration line to keep DI configuration unambiguous.
SPTarkov.Server/Program.cs [67]
-// builder.Services.AddHostedService<SptServerBackgroundService>();
+/* removed obsolete hosted service registration */
Suggestion importance[1-10]: 3
Why: This is a valid code cleanup suggestion that improves maintainability by removing commented-out, obsolete code, but its impact on functionality is minimal.
| PR 557 (2025-08-19) |
[possible issue] Restrict wipe to PMC raids
✅ Restrict wipe to PMC raids
This unconditionally wipes PMC inventory on raid start and ignores the side chosen for the raid. If the player starts as Scav, this will still wipe PMC gear. Gate the wipe to PMC raids only, and consider logging the action for traceability.
Libraries/SPTarkov.Server.Core/Services/LocationLifecycleService.cs [76-78]
-// If config enabled, remove players equipped items to prevent alt-F4 from persisting items
-if (_lostOnDeathConfig.WipeOnRaidStart)
+// If config enabled, remove equipped items to prevent alt-F4 from persisting items (PMC raids only)
+if (_lostOnDeathConfig.WipeOnRaidStart && string.Equals(request.PlayerSide, "pmc", StringComparison.OrdinalIgnoreCase))
+{
+ logger.Info("WipeOnRaidStart enabled - clearing PMC inventory before raid start");
inRaidHelper.DeleteInventory(playerProfile.CharacterData.PmcData, sessionId);
+}
Suggestion importance[1-10]: 9
Why: The suggestion correctly identifies a critical bug where a player's PMC inventory would be wiped even when starting a Scav raid, and provides a correct fix.
[general] Use safe default configuration
✅ Use safe default configuration
Enabling this by default can surprise users by wiping gear at raid start. Default the flag to false and let server operators opt in via configuration.
Libraries/SPTarkov.Server.Assets/SPT_Data/configs/lostondeath.json [20-21]
"specialSlotItems": false,
-"wipeOnRaidStart": true
+"wipeOnRaidStart": false
Suggestion importance[1-10]: 8
Why: The suggestion correctly points out that enabling the destructive wipeOnRaidStart feature by default is risky and could lead to unexpected gear loss for users.
[general] Add error handling around wipe
✅ Add error handling around wipe
Deleting the entire inventory at raid start is high risk; a transient error or misconfiguration can cause irreversible loss. Wrap the wipe in a try/catch and add a warning log to capture failures without crashing the raid start flow.
Libraries/SPTarkov.Server.Core/Services/LocationLifecycleService.cs [76-78]
-// If config enabled, remove players equipped items to prevent alt-F4 from persisting items
-if (_lostOnDeathConfig.WipeOnRaidStart)
- inRaidHelper.DeleteInventory(playerProfile.CharacterData.PmcData, sessionId);
+// If config enabled, remove equipped items to prevent alt-F4 from persisting items
+if (_lostOnDeathConfig.WipeOnRaidStart && string.Equals(request.PlayerSide, "pmc", StringComparison.OrdinalIgnoreCase))
+{
+ try
+ {
+ logger.Info("WipeOnRaidStart enabled - clearing PMC inventory before raid start");
+ inRaidHelper.DeleteInventory(playerProfile.CharacterData.PmcData, sessionId);
+ }
+ catch (Exception ex)
+ {
+ logger.Error(ex, "Failed to clear PMC inventory on raid start; proceeding without wipe");
+ }
+}
Suggestion importance[1-10]: 7
Why: The suggestion improves code robustness by adding a try-catch block to a critical operation, preventing potential raid start failures.
[general] Gate wipe to standard raid modes
✅ Gate wipe to standard raid modes
Avoid wiping when joining non-standard sessions (e.g., offline testing, co-op, or after reconnects). Check the raid context (e.g., request.IsLocal/request.AiAmount/match mode flags) and only wipe for standard PMC live raids to prevent unintended loss.
Libraries/SPTarkov.Server.Core/Services/LocationLifecycleService.cs [76-78]
// If config enabled, remove players equipped items to prevent alt-F4 from persisting items
-if (_lostOnDeathConfig.WipeOnRaidStart)
+if (_lostOnDeathConfig.WipeOnRaidStart
+ && string.Equals(request.PlayerSide, "pmc", StringComparison.OrdinalIgnoreCase)
+ && request.SavageLock == false
+ && request.InsuredEquipment == true)
+{
inRaidHelper.DeleteInventory(playerProfile.CharacterData.PmcData, sessionId);
+}
Suggestion importance[1-10]: 9
Why: The suggestion correctly points out that wiping inventory should only occur in standard online PMC raids, preventing unintended data loss in other modes like offline or Scav runs, which is a significant oversight.
| PR 546 (2025-08-13) |
[possible issue] Always blacklist core inventory IDs
✅ Always blacklist core inventory IDs
The outer condition ties blacklist population to HideoutAreaStashes being non-empty. This can skip blacklisting critical root containers when HideoutAreaStashes is null/empty, causing ID collisions and broken parent references. Split the checks: always add core inventory IDs when pmcData.Inventory exists, and only union hideout stashes when available.
Libraries/SPTarkov.Server.Core/Helpers/ItemHelper.cs [998-1012]
-if (pmcData != null && pmcData.Inventory?.HideoutAreaStashes?.Values.Count != 0)
+if (pmcData?.Inventory != null)
{
itemIdBlacklist.UnionWith(
new List<MongoId>
{
- pmcData.Inventory!.Equipment!.Value,
+ pmcData.Inventory.Equipment!.Value,
pmcData.Inventory.QuestRaidItems!.Value,
pmcData.Inventory.QuestStashItems!.Value,
pmcData.Inventory.SortingTable!.Value,
pmcData.Inventory.Stash!.Value,
pmcData.Inventory.HideoutCustomizationStashId!.Value,
}
);
- itemIdBlacklist.UnionWith(pmcData.Inventory.HideoutAreaStashes?.Values!);
+ if (pmcData.Inventory.HideoutAreaStashes != null)
+ {
+ itemIdBlacklist.UnionWith(pmcData.Inventory.HideoutAreaStashes.Values);
+ }
}
Suggestion importance[1-10]: 9
Why: The suggestion correctly identifies a critical logic flaw in the PR where blacklisting essential inventory items is wrongly tied to the existence of HideoutAreaStashes, which could lead to major item ID collision bugs.
| PR 545 (2025-08-12) |
[possible issue] Fail fast with exception
✅ Fail fast with exception
Avoid returning early on a critical state without surfacing the failure to callers. Throw a custom exception after logging so upstream logic can handle the failure explicitly. This prevents silent partial state and makes the error path consistent with other null-guard checks above.
Libraries/SPTarkov.Server.Core/Helpers/InRaidHelper.cs [72-76]
if (insured is null)
{
logger.Error("Cloned insured items are null when trying to set inventory post raid");
- return;
+ throw new InRaidHelperException("Cloned insured items are null when trying to set inventory post raid");
}
Suggestion importance[1-10]: 7
Why: The suggestion correctly points out that returning silently can hide a critical failure state, and throwing an exception is consistent with other null checks in the method, improving robustness.