Part 3: BOUNDARY-NATIVE LANGUAGE MODELS
Section 13: Five-Layer BNLM Architecture
Pendry, S
Halfhuman Draft
2026
Previous Sections
Post Zero Link
Section 12: BNST as Architectural Solution
13.1 Architecture Overview
Boundary-Native Language Models implement BNST through five interconnected layers:
Layer 1: Universal Representation (U)
↓
Layer 2: Boundary Complement (-{})
↓
Layer 3: Russell Boundary Filtering (R)
↓
Layer 4: Validity Checking (Valid(·))
↓
Layer 5: Investigation Output
Each layer corresponds to a BNST axiom and performs a specific validation function.
13.2 Layer 1: Universal Representation
Purpose: Generate complete possibility space of interpretations
BNST Foundation: Axiom 1 (Universal Set), Axiom 2 (Unrestricted Comprehension)
Implementation:
class UniversalRepresentationLayer:
    def __init__(self, input_text, context):
        self.input = input_text
        self.context = context

    def generate_universe(self):
        """
        Generate all reasonable interpretations of input.
        Don't optimize for 'most likely'; include the full space.
        """
        interpretations = {
            'literal': self.parse_literal(),
            'metaphorical': self.parse_metaphorical(),
            'technical': self.parse_technical(),
            'contextual': self.parse_contextual(),
            'interrogative': self.parse_as_question(),
            'declarative': self.parse_as_statement(),
            'ambiguous': self.identify_ambiguities()
        }
        return interpretations

    def parse_literal(self):
        """Direct, face-value interpretation"""
        return self.standard_semantic_parse()

    def parse_metaphorical(self):
        """Figurative or analogical meanings"""
        return self.identify_metaphors_and_analogies()

    def parse_technical(self):
        """Domain-specific technical interpretation"""
        return self.apply_domain_specific_parsers()

    def parse_contextual(self):
        """Interpretation dependent on conversation context"""
        return self.context_dependent_parse()
Key principle: Start with maximum interpretive possibility. Don’t prematurely narrow to “most likely.” This mirrors naive set theory’s unrestricted comprehension.
Example:
Input: “Just finished my workout, I felt some burn in my shoulders/arms?”
Universal Representation generates:
- Literal: Request for workout report
- Contextual: Continuation of fitness discussion
- Interrogative: Information-seeking question
- Social: Conversation maintenance/rapport building
- Meta: Testing system’s memory of previous discussion
- Ambiguous: Could be seeking feedback on quality OR acknowledgment of completion
All interpretations exist initially. Filtering happens in subsequent layers.
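The no-pruning principle can be shown in a minimal runnable sketch. The parsers here are hypothetical stand-ins (simple lambdas, not real semantic parsers); the point is only that every applicable interpretation survives Layer 1, with no "most likely" ranking:

```python
# Minimal sketch of Layer 1: every parser contributes an interpretation,
# and nothing is ranked or dropped at this stage. The parsers are toy
# stand-ins for the real semantic machinery.

def generate_universe(text: str) -> dict:
    """Return the full interpretation space for `text`, unranked."""
    parsers = {
        'literal': lambda t: f"literal reading of: {t}",
        'interrogative': lambda t: f"question detected: {t}" if '?' in t else None,
        'declarative': lambda t: f"statement detected: {t}" if '?' not in t else None,
    }
    # Keep every interpretation a parser produced; no 'most likely' pruning
    return {name: parse(text) for name, parse in parsers.items()
            if parse(text) is not None}

universe = generate_universe("Just finished my workout, I felt some burn?")
print(sorted(universe))
```

Filtering these candidates is deferred to Layers 2 through 4.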
13.3 Layer 2: Boundary Complement
Purpose: Explicitly compute what each interpretation excludes
BNST Foundation: Axiom 4 (Boundary Complement: -A = U \ A)
Implementation:
class BoundaryComplementLayer:
    def __init__(self, universe):
        self.universe = universe

    def compute_boundaries(self, interpretation):
        """
        For interpretation A, compute -A (everything A excludes)
        """
        boundary = {
            'excluded_meanings': [],
            'contradictions': [],
            'alternatives': [],
            'scope_limits': []
        }
        # What meanings does this interpretation exclude?
        for other_interp in self.universe:
            if self.are_contradictory(interpretation, other_interp):
                boundary['contradictions'].append(other_interp)
            elif self.are_alternatives(interpretation, other_interp):
                boundary['alternatives'].append(other_interp)
            elif self.is_excluded_by_scope(interpretation, other_interp):
                boundary['excluded_meanings'].append(other_interp)
        # What are the scope boundaries?
        boundary['scope_limits'] = self.identify_scope_boundaries(interpretation)
        return boundary

    def are_contradictory(self, A, B):
        """Check if A and B cannot both be true"""
        # Implementation: semantic contradiction detection
        pass

    def are_alternatives(self, A, B):
        """Check if A and B are competing interpretations"""
        # Implementation: alternative hypothesis detection
        pass

    def is_excluded_by_scope(self, A, B):
        """Check if A's scope excludes B"""
        # Implementation: scope analysis
        pass

    def identify_scope_boundaries(self, interpretation):
        """Identify what interpretation does NOT claim"""
        return {
            'temporal_bounds': self.get_time_constraints(interpretation),
            'domain_bounds': self.get_domain_constraints(interpretation),
            'certainty_bounds': self.get_confidence_limits(interpretation)
        }
Key principle: Understanding requires knowing boundaries: what you’re NOT saying matters as much as what you ARE saying.
Example:
Interpretation: “Burn pattern makes sense given grip variations you used”
Boundary Analysis (-interpretation):
Excluded meanings:
- Did NOT claim workout was bad
- Did NOT claim chest was activated
- Did NOT claim specific weight/intensity
- Did NOT claim this matches goals
Contradictions:
- Contradicts “no workout happened”
- Contradicts “only chest was worked”
Alternatives:
- Could have reported specific metrics
- Could have focused on different muscle groups
- Could have emphasized form over sensation
Scope limits:
- Only describes sensation, not effectiveness
- Only covers one session, not progress
- Only reports what happened, not whether it was optimal
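At the set-theoretic core, Axiom 4 is just set difference. The sketch below runs the complement on a toy interpretation universe; the string labels are illustrative stand-ins, not the real semantic objects:

```python
# Toy demonstration of Axiom 4 (-A = U \ A): the boundary of an
# interpretation set is everything in the universe it excludes.

U = {'literal', 'metaphorical', 'technical', 'contextual',
     'interrogative', 'declarative'}

def boundary_complement(A: set, universe: set = U) -> set:
    """Everything the interpretation set A excludes: -A = U minus A."""
    return universe - A

A = {'literal', 'contextual'}
print(boundary_complement(A))
```

The complement of the full universe is empty: an interpretation that excludes nothing has no boundary, and therefore says nothing.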
13.4 Layer 3: Russell Boundary Filtering
Purpose: Identify self-referencing interpretations and validation claims
BNST Foundation: Axiom 5 (Russell Boundary Set: R = -{ x | x ∈ x })
Implementation:
class RussellBoundaryLayer:
    def __init__(self):
        self.R = set()  # Non-self-referencing interpretations

    def filter_self_referencing(self, interpretation, validation_claim):
        """
        Check if validation depends only on what's being validated.

        Returns:
            True if interpretation passes (non-self-referencing)
            False if interpretation fails (self-referencing)
        """
        # Extract what the validation claim depends on
        dependencies = self.extract_dependencies(validation_claim)
        # Check if validation is circular
        if self.is_circular(interpretation, dependencies):
            return False  # Self-referencing, exclude from R
        # Check if validation has external grounding
        if not self.has_external_grounding(dependencies):
            return False  # No external grounding, exclude from R
        # Interpretation passes; add to R
        self.R.add(interpretation)
        return True

    def is_circular(self, interpretation, dependencies):
        """
        Check if validation of interpretation depends on interpretation itself.
        Returns True if circular dependency detected.
        """
        # Does validation cite only the interpretation?
        if dependencies == [interpretation]:
            return True
        # Does validation create a reasoning loop?
        if self.creates_reasoning_loop(interpretation, dependencies):
            return True
        # Does validation require knowing the answer to validate the answer?
        if interpretation in dependencies:
            return True
        return False

    def has_external_grounding(self, dependencies):
        """
        Check if validation has sources external to what's being validated.
        Returns True if external grounding exists.
        """
        external_sources = [
            dep for dep in dependencies
            if self.is_external_source(dep)
        ]
        return len(external_sources) > 0

    def is_external_source(self, source):
        """
        Determine if a source is external grounding.

        External sources include:
        - Observable data independent of claim
        - Third-party references
        - Causal relationships
        - Empirical evidence
        """
        # Implementation: external source detection
        pass
Key principle: Self-validating claims are automatically suspect. R contains only non-self-referencing interpretations.
Example 1: Self-Referencing (Rejected)
User: “Is my framework correct?”
LLM generates: “Your framework is correct because it logically follows from its premises.”
Russell Boundary Check:
Validation: "logically follows from its premises"
Dependencies: [framework premises]
Source: Only the framework itself
Result: CIRCULAR; validation depends only on what's being validated
Status: Excluded from R (self-referencing)
Example 2: Non-Self-Referencing (Accepted)
User: “How was my workout?”
LLM generates: “Burn pattern makes sense given grip variations you used”
Russell Boundary Check:
Validation: "makes sense given grip variations"
Dependencies: [reported sensation, known exercise biomechanics]
Sources: User observation + biomechanical principles
Result: NON-CIRCULAR; validation is grounded in external biomechanics
Status: Included in R (non-self-referencing)
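Both examples can be run through a stripped-down version of the Russell check. The function names mirror the sketch above (`is_circular`, external grounding), but everything here operates on plain strings for illustration:

```python
# Toy Russell boundary check: a validation claim passes only if its
# dependencies reach outside the interpretation itself AND at least
# one dependency is an external source.

def is_circular(interpretation: str, dependencies: list) -> bool:
    """Circular if the validation cites the claim itself, or cites nothing."""
    return interpretation in dependencies or not dependencies

def passes_russell(interpretation: str, dependencies: list,
                   external_sources: set) -> bool:
    """True iff the interpretation belongs in R (non-self-referencing)."""
    if is_circular(interpretation, dependencies):
        return False
    return any(dep in external_sources for dep in dependencies)

external = {'exercise biomechanics', 'reported sensation'}

# Example 1: validation cites only the framework itself -> rejected
print(passes_russell('framework is correct', ['framework is correct'], external))
# Example 2: validation grounds in biomechanics -> accepted
print(passes_russell('burn pattern fits grip variation',
                     ['exercise biomechanics', 'reported sensation'], external))
```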
13.5 Layer 4: Validity Checking
Purpose: Apply formal validity predicate to determine if interpretations can be confidently stated
BNST Foundation: Axiom 6 (Valid(m) ⟺ m ∈ R), Axiom 7 (Conditional Complement)
Implementation:
class ValidityLayer:
    def __init__(self, russell_boundary_set):
        self.R = russell_boundary_set

    def check_validity(self, interpretation, reasoning):
        """
        Apply validity predicate: Valid(m) ⟺ m ∈ R

        Returns:
            validity_status: bool
            validity_report: detailed analysis
        """
        # Check membership in the Russell layer's set (self.R.R)
        in_boundary_set = interpretation in self.R.R
        # Additional validity criteria
        validity_metrics = {
            'in_boundary_set': in_boundary_set,
            'external_grounding_count': self.count_external_sources(reasoning),
            'contradiction_acknowledged': self.acknowledges_boundary(reasoning),
            'uncertainty_calibrated': self.has_calibrated_confidence(reasoning),
            'causal_grounding': self.has_causal_explanation(reasoning)
        }
        # Interpretation is valid if it:
        # 1. Is non-self-referencing (in R)
        # 2. Has external grounding
        # 3. Acknowledges boundaries
        # 4. Calibrates uncertainty appropriately
        is_valid = (
            validity_metrics['in_boundary_set'] and
            validity_metrics['external_grounding_count'] > 0 and
            validity_metrics['contradiction_acknowledged'] and
            validity_metrics['uncertainty_calibrated']
        )
        validity_report = {
            'status': is_valid,
            'metrics': validity_metrics,
            'confidence_level': self.compute_confidence(validity_metrics),
            'limitations': self.identify_limitations(interpretation, reasoning)
        }
        return is_valid, validity_report

    def count_external_sources(self, reasoning):
        """Count sources external to what's being validated"""
        external = 0
        for source in reasoning.sources:
            if self.is_external(source):
                external += 1
        return external

    def acknowledges_boundary(self, reasoning):
        """Check if reasoning explicitly states what it's NOT claiming"""
        return hasattr(reasoning, 'boundary_acknowledgment') and \
            len(reasoning.boundary_acknowledgment) > 0

    def has_calibrated_confidence(self, reasoning):
        """Check if confidence level matches grounding strength"""
        if reasoning.confidence > 0.8 and reasoning.external_sources < 2:
            return False  # Over-confident given limited grounding
        if reasoning.confidence < 0.3 and reasoning.external_sources > 5:
            return False  # Under-confident given strong grounding
        return True

    def has_causal_explanation(self, reasoning):
        """Check if reasoning provides causal mechanism, not just correlation"""
        return hasattr(reasoning, 'causal_mechanism') and \
            reasoning.causal_mechanism is not None

    def compute_confidence(self, metrics):
        """
        Calibrate confidence based on validity metrics.

        High confidence requires:
        - Non-self-referencing
        - Multiple external sources
        - Causal explanation
        - Acknowledged boundaries
        """
        if not metrics['in_boundary_set']:
            return 0.0  # Self-referencing = no confidence
        confidence = 0.5  # Base confidence for non-self-referencing
        # Adjust based on external grounding
        confidence += min(0.3, metrics['external_grounding_count'] * 0.1)
        # Adjust based on causal explanation
        if metrics['causal_grounding']:
            confidence += 0.15
        # Reduce if boundaries not acknowledged
        if not metrics['contradiction_acknowledged']:
            confidence -= 0.2
        return max(0.0, min(1.0, confidence))

    def identify_limitations(self, interpretation, reasoning):
        """Explicitly state what interpretation cannot determine"""
        limitations = []
        # What questions remain unanswered?
        limitations.append(self.find_unanswered_questions(interpretation))
        # What alternative interpretations exist?
        limitations.append(self.find_viable_alternatives(interpretation))
        # What would be needed for higher confidence?
        limitations.append(self.identify_missing_grounding(reasoning))
        return limitations
Key principle: Validity is checkable and must be earned through external grounding. Self-validation always fails.
Example: Validity Check in Action
User: “Is my framework correct?”
LLM interpretation: “Framework shows internal consistency”
Validity Check:
in_boundary_set: True (not claiming validity, just describing property)
external_grounding: 1 (can check structural consistency)
contradiction_acknowledged: Need to add (what's NOT claimed)
uncertainty_calibrated: Need to set appropriately
Validity assessment:
- CAN validate: Structural consistency (checkable)
- CANNOT validate: Correctness (requires domain expertise)
- Confidence: 0.7 for structural claims, 0.0 for correctness claims
Final response:
"I can verify structural consistency [details].
I cannot validate correctness without external domain expertise.
You would need [specific type of expert] to assess validity."
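The confidence arithmetic in `compute_confidence` is self-contained enough to run standalone. The weights below (0.5 base, 0.1 per external source capped at 0.3, 0.15 for causal grounding, minus 0.2 for unacknowledged boundaries) come directly from the ValidityLayer sketch:

```python
# Standalone version of ValidityLayer.compute_confidence, using the
# same weights as the class sketch. Self-referencing claims short-
# circuit to zero regardless of other metrics.

def compute_confidence(metrics: dict) -> float:
    if not metrics['in_boundary_set']:
        return 0.0  # self-referencing claims earn no confidence
    confidence = 0.5  # base for non-self-referencing
    confidence += min(0.3, metrics['external_grounding_count'] * 0.1)
    if metrics['causal_grounding']:
        confidence += 0.15
    if not metrics['contradiction_acknowledged']:
        confidence -= 0.2
    return max(0.0, min(1.0, confidence))

print(compute_confidence({'in_boundary_set': True,
                          'external_grounding_count': 2,
                          'causal_grounding': True,
                          'contradiction_acknowledged': True}))
```

Note the asymmetry: no amount of external grounding rescues a claim that fails the Russell check.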
13.6 Layer 5: Investigation Output
Purpose: Generate output that reflects investigation process, not just conclusion
BNST Foundation: Entire framework prioritizes investigation quality over answer speed
Implementation:
class InvestigationLayer:
    def __init__(self):
        self.investigation_depth = 0

    def generate_output(self, valid_interpretations, validity_reports):
        """
        Create output that shows investigation process and calibrated conclusions.
        Optimize for investigation quality, not answer speed.
        """
        investigation = {
            'process': self.document_investigation(valid_interpretations),
            'findings': self.synthesize_findings(valid_interpretations),
            'boundaries': self.explicit_boundaries(validity_reports),
            'confidence': self.calibrated_confidence(validity_reports),
            'limitations': self.acknowledged_limitations(validity_reports),
            'recommendations': self.suggest_next_steps(validity_reports)
        }
        # Format for user consumption
        output = self.format_output(investigation)
        return output

    def document_investigation(self, interpretations):
        """
        Show what interpretations were considered.
        Transparency about investigation process.
        """
        return {
            'interpretations_considered': len(interpretations),
            'validity_checks_performed': self.count_validity_checks(),
            'external_sources_consulted': self.list_external_sources(),
            'boundary_analyses_completed': self.count_boundary_analyses()
        }

    def synthesize_findings(self, interpretations):
        """
        Combine valid interpretations into coherent understanding.
        NOT just picking 'most likely'; synthesizing multiple perspectives.
        """
        findings = []
        for interp in interpretations:
            if interp.validity_status:
                findings.append({
                    'claim': interp.conclusion,
                    'grounding': interp.external_sources,
                    'confidence': interp.confidence_level,
                    'boundaries': interp.what_not_claimed
                })
        return self.combine_findings(findings)

    def explicit_boundaries(self, reports):
        """
        State what is NOT being claimed.
        Boundaries are as important as claims.
        """
        boundaries = {
            'not_claimed': [],
            'would_require': [],
            'alternatives': []
        }
        for report in reports:
            boundaries['not_claimed'].extend(report.limitations)
            boundaries['would_require'].extend(
                self.what_would_enable_stronger_claims(report)
            )
            boundaries['alternatives'].extend(report.alternative_interpretations)
        return boundaries

    def calibrated_confidence(self, reports):
        """
        Express confidence calibrated to grounding strength.
        Over-confidence is worse than under-confidence.
        """
        confidence_levels = [r.confidence for r in reports]
        # Conservative aggregation: limited by weakest grounding
        overall_confidence = min(confidence_levels) if confidence_levels else 0.0
        # Explanation of confidence level
        confidence_explanation = self.explain_confidence(reports)
        return {
            'level': overall_confidence,
            'explanation': confidence_explanation,
            'what_would_increase': self.confidence_improvements(reports)
        }

    def acknowledged_limitations(self, reports):
        """
        Explicitly state what cannot be determined.
        Epistemic humility is architectural, not learned.
        """
        limitations = []
        for report in reports:
            limitations.extend(report.limitations)
        # Deduplicate and organize
        return self.organize_limitations(limitations)

    def suggest_next_steps(self, reports):
        """
        What would improve confidence or answer unanswered questions?
        """
        suggestions = []
        for report in reports:
            if report.confidence < 0.7:
                suggestions.append(
                    f"To increase confidence: {report.what_missing}"
                )
            if report.unanswered_questions:
                suggestions.append(
                    f"Remaining questions: {report.unanswered_questions}"
                )
        return suggestions

    def format_output(self, investigation):
        """
        Convert investigation results to natural language.
        Format depends on context: technical vs. casual, etc.
        """
        # Determine appropriate style based on query type
        style = self.determine_output_style(investigation)
        if style == 'technical':
            return self.format_technical(investigation)
        elif style == 'casual':
            return self.format_conversational(investigation)
        else:
            return self.format_balanced(investigation)
Key principle: The answer IS the investigation. Don’t hide the process; show it.
Example Output:
User: “Is my framework correct?”
Traditional LLM:
“Your framework appears sound and well-reasoned.”
BNLM Output:
“I can analyze structural properties [investigation process]:
What I can verify:
- Internal logical consistency [checked via…]
- Completeness of definitions [assessed by…]
What I cannot verify without external grounding:
- Whether premises are true (requires domain expertise)
- Whether conclusions are novel (requires literature review)
- Whether framework is practically useful (requires empirical testing)
Confidence: High (0.8) for structural analysis, None (0.0) for validity claims
What would enable stronger assessment:
- Review by domain expert in [specific field]
- Comparison to existing frameworks in literature
- Empirical testing of predictions
Boundary: I’m analyzing structure, not validating correctness.”
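The confidence figures in this output follow the conservative aggregation rule from `calibrated_confidence`: the overall level is limited by the weakest-grounded finding. A two-line sketch makes the rule concrete:

```python
# Conservative aggregation, as in calibrated_confidence: overall
# confidence is the minimum across findings, so one ungrounded claim
# (here, correctness at 0.0) caps the combined assessment.

def aggregate_confidence(levels: list) -> float:
    """Overall confidence is limited by the weakest grounding."""
    return min(levels) if levels else 0.0

print(aggregate_confidence([0.8, 0.0]))  # structural 0.8, correctness 0.0
```

This is why the BNLM output above keeps the 0.8 structural claim and the 0.0 correctness claim separate rather than averaging them into a misleading middle figure.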
13.7 Complete BNLM Processing Pipeline
Putting it all together:
class BoundaryNativeLLM:
    def __init__(self):
        # Layers 1-2 are stored as classes and instantiated per request;
        # layers 3-5 persist across requests
        self.universal_layer = UniversalRepresentationLayer
        self.boundary_layer = BoundaryComplementLayer
        self.russell_layer = RussellBoundaryLayer()
        self.validity_layer = ValidityLayer(self.russell_layer)
        self.investigation_layer = InvestigationLayer()

    def process(self, user_input, context=None):
        """
        Complete BNLM processing pipeline.
        Returns investigation-quality output with calibrated confidence.
        """
        # Layer 1: Generate universe of interpretations
        universe = self.universal_layer(user_input, context).generate_universe()
        # Layer 2: Compute boundaries for each interpretation
        bounded_interpretations = []
        for interp in universe.values():
            boundary = self.boundary_layer(universe).compute_boundaries(interp)
            bounded_interpretations.append({
                'interpretation': interp,
                'boundary': boundary
            })
        # Layer 3: Filter self-referencing interpretations
        valid_candidates = []
        for item in bounded_interpretations:
            # Generate validation claim for this interpretation
            validation = self.generate_validation(item['interpretation'])
            # Check if validation is self-referencing
            passes_russell = self.russell_layer.filter_self_referencing(
                item['interpretation'],
                validation
            )
            if passes_russell:
                valid_candidates.append({
                    'interpretation': item['interpretation'],
                    'boundary': item['boundary'],
                    'validation': validation
                })
        # Layer 4: Apply validity predicate
        validated_interpretations = []
        validity_reports = []
        for candidate in valid_candidates:
            is_valid, report = self.validity_layer.check_validity(
                candidate['interpretation'],
                candidate['validation']
            )
            if is_valid:
                validated_interpretations.append(candidate)
                validity_reports.append(report)
        # Layer 5: Generate investigation output
        output = self.investigation_layer.generate_output(
            validated_interpretations,
            validity_reports
        )
        return output
The complete pipeline ensures:
- All interpretations considered (Universal)
- Boundaries made explicit (Complement)
- Self-reference detected (Russell)
- Validity required (Predicate)
- Investigation documented (Output)
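The data flow through the five stages can be traced end to end in a compressed, runnable toy. Everything here is a stand-in for the real semantic machinery: the universe is a small set of string labels, grounding is literal set membership, and Layer 5 is a plain dict:

```python
# Compressed toy run of the five-layer pipeline on one claim. The set
# U and the helper logic are illustrative, not the real components.

U = {'structural_consistency', 'correctness', 'novelty'}

def toy_pipeline(claim: str, dependencies: list, external: set) -> dict:
    # Layer 1: the claim is one point in the universe of interpretations
    universe = U | {claim}
    # Layer 2: boundary complement, -claim = universe minus {claim}
    boundary = universe - {claim}
    # Layer 3: Russell filter; reject if validation cites only the claim,
    # or cites nothing grounded outside it
    in_R = claim not in dependencies and any(d in external for d in dependencies)
    # Layer 4: validity predicate, Valid(m) iff m in R
    valid = in_R
    # Layer 5: investigation output with explicit boundaries
    return {'claim': claim, 'valid': valid, 'not_claimed': sorted(boundary)}

report = toy_pipeline('structural_consistency',
                      ['definition structure'], {'definition structure'})
print(report['valid'], report['not_claimed'])
```

Even at this scale the shape of the output matches the BNLM examples above: a validity verdict plus an explicit list of what was NOT claimed.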
Next up
Part 3: BOUNDARY-NATIVE LANGUAGE MODELS
Section 14: Training Methodology
© 2026 HalfHuman Draft - Pendry, S
This post is licensed under Creative Commons Attribution 4.0 (CC BY 4.0).
Code examples (if any) are licensed under the Apache License, Version 2.0.
See /license for details.