Compare commits

...

28 Commits

Author SHA1 Message Date
xushiwei
8ced14b9ec Merge pull request #1415 from goplus/dependabot/github_actions/actions/checkout-6
chore(deps): bump actions/checkout from 5 to 6
2025-11-22 06:51:04 +08:00
dependabot[bot]
6b865828a3 chore(deps): bump actions/checkout from 5 to 6
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-21 00:21:12 +00:00
xushiwei
b25ae1b4e7 Merge pull request #1384 from luoliwoshang/feature/export-different-names-1378
cl: support //export with different symbol names on embedded targets
2025-11-17 08:10:09 +08:00
xushiwei
a6516de181 Merge pull request #1309 from MeteorsLiu/impl-baremetal-gc
feat: implement baremetal gc
2025-11-17 07:41:31 +08:00
xushiwei
4a268650fc Merge pull request #1400 from cpunion/build/refactor-main-module-generation
build: refactor main module generation and fix stdout/stderr null checks
2025-11-17 07:34:03 +08:00
Li Jie
7e4e53eb1c build: refactor emitStdioNobuf for performance and readability 2025-11-16 20:17:30 +08:00
Li Jie
1473ee98f7 build: add package and function docs 2025-11-16 20:16:16 +08:00
xgopilot
8c7e8b6290 build: apply review feedback on main module generation 2025-11-16 20:16:16 +08:00
xgopilot
2a4b2ef023 build: remove error return from genMainModule
The genMainModule function never returns an error, so its signature was
simplified to return only Package.

Updated:
- genMainModule signature: (Package, error) -> Package
- Call site in build.go to not handle error
- Both test cases to remove error checking

Generated with [codeagent](https://github.com/qbox/codeagent)
Co-authored-by: cpunion <8459+cpunion@users.noreply.github.com>
2025-11-16 20:16:16 +08:00
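A minimal, hypothetical sketch of the simplification described in the commit above (the real genMainModule signature appears in the main_module.go diff below; all names here are illustrative):

package main

import "fmt"

type Package struct{ PkgPath string }

// Before this change the generator returned (Package, error) even though the
// error was always nil; dropping the error return lets call sites skip the check.
func genMainModule(pkgPath string) Package {
	return Package{PkgPath: pkgPath + ".main"}
}

func main() {
	entryPkg := genMainModule("example.com/foo") // no error value to handle
	fmt.Println(entryPkg.PkgPath)
}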
xgopilot
7abb468592 fix: separate stdout and stderr null checks in emitStdioNobuf
The original code incorrectly used the stdout null check condition
for both stdout and stderr pointer selection. This caused incorrect
behavior when stderr is null but stdout is not, or vice-versa.

This fix separates the null checks for stdout and stderr into
independent conditions, ensuring each stream is properly selected
based on its own null status.

Generated with [codeagent](https://github.com/qbox/codeagent)
Co-authored-by: cpunion <8459+cpunion@users.noreply.github.com>
2025-11-16 20:16:16 +08:00
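A minimal, hypothetical Go sketch of the corrected per-stream selection described in the commit above (the real fix is in emitStdioNobuf in the main_module.go diff below; the helper here is illustrative only):

package main

import "fmt"

// selectStream falls back only when this particular stream is nil.
// The bug was choosing stderr's pointer based on stdout's null check.
func selectStream(stream, fallback *int) *int {
	if stream == nil {
		return fallback
	}
	return stream
}

func main() {
	var stdout *int // stdout happens to be null
	stderrVal := 2
	stderr := &stderrVal // stderr is not

	stdoutFallback, stderrFallback := new(int), new(int)
	fmt.Println(selectStream(stdout, stdoutFallback) == stdoutFallback) // true: stdout uses its fallback
	fmt.Println(selectStream(stderr, stderrFallback) == stderr)         // true: stderr keeps its own pointer
}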
Li Jie
bf9c6abb23 build: refactor main module generation 2025-11-16 20:16:16 +08:00
Haolan
065126e270 feat: make defer tls stub for baremetal 2025-11-14 16:13:43 +08:00
Haolan
552156ff40 fix: adjust gc stats struct 2025-11-14 16:13:43 +08:00
Haolan
36e84196c6 test: add test for gc stats 2025-11-14 16:13:43 +08:00
Haolan
7f1e07755a ci: use llgo test instead
2025-11-14 16:13:41 +08:00
Haolan
af27d0475d revert some unnecessary change 2025-11-14 16:13:41 +08:00
Haolan
bb29e8c768 docs: add comments for gc mutex
docs: add comments for tinygo gc
2025-11-14 16:13:38 +08:00
Haolan
eec2c271bd feat: replace println with gcPanic 2025-11-14 16:13:38 +08:00
Haolan
1ed924ed50 fix: add pthread GC support for baremetal
test: fix test logic

chore: format codes
2025-11-14 16:13:34 +08:00
Haolan
c46ca84122 revert disabling stdio buffer 2025-11-14 16:13:34 +08:00
Haolan
b36be05c1e fix: GC() signature 2025-11-14 16:13:34 +08:00
Haolan
66a537ad29 fix: add gc dummy mutex 2025-11-14 16:13:34 +08:00
Haolan
33a00dff1b fix: invalid import and improve tests
refactor: remove initGC

test: add test for GC

fix: invalid import in z_gc

ci: test baremetal GC for coverage

ci: test baremetal GC for coverage
2025-11-14 16:13:03 +08:00
Haolan
e4a69ce413 fix: disable buffers 2025-11-14 16:13:03 +08:00
Haolan
531f69ae6a fix: bdwgc.init() causing archive mode building fail
2025-11-14 16:13:03 +08:00
Haolan
812dfd45c9 feat: implement baremetal GC
fix: pthread gc

fix: xiao-esp32c3 symbol

refactor: use clite memset instead of linking

fix: stack top symbol
2025-11-14 16:12:56 +08:00
luoliwoshang
e11ae0e21b test: add comprehensive tests and CI for export feature
Add extensive test coverage, a demo program, and CI integration for the
//export with different names feature:

Unit Tests (cl/builtin_test.go):
- TestHandleExportDiffName: core functionality with 4 scenarios
  * Different names with enableExportRename
  * Same names with enableExportRename
  * Different names with spaces in export directive
  * Matching names without enableExportRename
- TestInitLinknameByDocExportDiffNames: flag behavior verification
  * Export with different names when enabled
  * Export with same name when enabled
  * Normal linkname directives
- TestInitLinkExportDiffNames: edge case handling
  * Symbol not found in decl packages (silent handling)

Demo (_demo/embed/export/):
- Example program demonstrating various export patterns
- Verification script testing both embedded and non-embedded targets
- Documents expected behavior and error cases

CI Integration (.github/workflows/llgo.yml):
- Add export demo to embedded target tests
- Ensure feature works correctly across platforms
- Catch regressions in future changes

The tests verify:
✓ Different names work with -target flag
✓ Same names work in all cases
✓ Different names fail without -target flag
✓ Proper error messages for invalid exports
✓ Silent handling for decl packages

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-10 18:52:58 +08:00
luoliwoshang
dac3365c73 cl,build: support //export with different names on embedded targets
Add support for using different C symbol names in //export directives
when compiling for embedded targets (enabled via -target flag).

This is for TinyGo compatibility on embedded platforms where hardware
interrupt handlers often require specific naming conventions.

Implementation:
- Add EnableExportRename() to control export rename behavior
- Add isExport parameter to initLink callback for context awareness
- Update initLinknameByDoc() to handle export rename logic:
  * When isExport && enableExportRename: allow different names
  * Otherwise: require name match
- Clean error handling in initLink():
  * export + enableExportRename: silent return (already processed)
  * export + !enableExportRename: panic with clear error message
  * other cases: warning for missing linkname

Example:
  //export LPSPI2_IRQHandler
  func interruptLPSPI2() { ... }

The Go function is named interruptLPSPI2 but exported as
LPSPI2_IRQHandler for the hardware vector table.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-10 18:52:36 +08:00
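A minimal, hypothetical sketch of the name-matching rule described in the two commits above (the actual logic lives in the cl package hunks below; the function and error text here are illustrative):

package main

import "fmt"

// allowExportName mirrors the rule above: the //export symbol may differ from
// the Go function name only when export rename is enabled (building with
// -target for an embedded platform); otherwise the names must match.
func allowExportName(goName, exportName string, enableExportRename bool) error {
	if goName == exportName || enableExportRename {
		return nil
	}
	return fmt.Errorf("export comment has wrong name %q", exportName)
}

func main() {
	fmt.Println(allowExportName("interruptLPSPI2", "LPSPI2_IRQHandler", true))  // <nil>: allowed on embedded targets
	fmt.Println(allowExportName("interruptLPSPI2", "LPSPI2_IRQHandler", false)) // names must match without -target
}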
37 changed files with 2282 additions and 171 deletions

View File

@@ -18,7 +18,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Set up Node.js
uses: actions/setup-node@v6
@@ -47,7 +47,7 @@ jobs:
- ubuntu-latest
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Set up Go
uses: ./.github/actions/setup-go
@@ -84,7 +84,7 @@ jobs:
- ubuntu-latest
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Set up Go
uses: actions/setup-go@v6
@@ -136,7 +136,7 @@ jobs:
- ubuntu-latest
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Set up Go
uses: actions/setup-go@v6

View File

@@ -18,7 +18,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Set up Go
uses: ./.github/actions/setup-go

View File

@@ -28,7 +28,7 @@ jobs:
llvm: [19]
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Install dependencies
uses: ./.github/actions/setup-deps
with:

View File

@@ -46,7 +46,7 @@ jobs:
go: ["1.21.13", "1.22.12", "1.23.6", "1.24.2"]
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Install dependencies
uses: ./.github/actions/setup-deps
with:
@@ -61,7 +61,7 @@ jobs:
if ${{ startsWith(matrix.os, 'macos') }}; then
DEMO_PKG="cargs_darwin_arm64.zip"
else
DEMO_PKG="cargs_linux_amd64.zip"
DEMO_PKG="cargs_linux_amd64.zip"
fi
mkdir -p ./_demo/c/cargs/libs
@@ -133,6 +133,13 @@ jobs:
chmod +x test.sh
./test.sh
- name: Test export with different symbol names on embedded targets
run: |
echo "Testing //export with different symbol names on embedded targets..."
cd _demo/embed/export
chmod +x verify_export.sh
./verify_export.sh
- name: _xtool build tests
run: |
cd _xtool
@@ -159,7 +166,7 @@ jobs:
go: ["1.24.2"]
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Install dependencies
uses: ./.github/actions/setup-deps
with:
@@ -186,11 +193,15 @@ jobs:
uses: actions/setup-go@v6
with:
go-version: ${{matrix.go}}
- name: Test Baremetal GC
if: ${{!startsWith(matrix.os, 'macos')}}
working-directory: runtime/internal/runtime/tinygogc
run: llgo test -tags testGC .
- name: run llgo test
run: |
llgo test ./...
hello:
continue-on-error: true
timeout-minutes: 30
@@ -201,7 +212,7 @@ jobs:
go: ["1.21.13", "1.22.12", "1.23.6", "1.24.2"]
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Install dependencies
uses: ./.github/actions/setup-deps
with:
@@ -259,7 +270,7 @@ jobs:
llvm: [19]
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Install dependencies
uses: ./.github/actions/setup-deps
with:

View File

@@ -21,7 +21,7 @@ jobs:
linux-cache-key: ${{ steps.cache-keys.outputs.linux-key }}
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Calculate cache keys
id: cache-keys
run: |
@@ -33,7 +33,7 @@ jobs:
timeout-minutes: 30
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Check Linux sysroot cache
id: cache-linux-sysroot
uses: actions/cache/restore@v4
@@ -63,7 +63,7 @@ jobs:
needs: [setup, populate-linux-sysroot]
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Set up Release
uses: ./.github/actions/setup-goreleaser
with:
@@ -148,7 +148,7 @@ jobs:
go-mod-version: "1.24"
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Install dependencies
uses: ./.github/actions/setup-deps
with:
@@ -184,7 +184,7 @@ jobs:
if: startsWith(github.ref, 'refs/tags/')
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Set up Release

View File

@@ -26,7 +26,7 @@ jobs:
llvm: [19]
runs-on: ${{matrix.os}}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Install dependencies
uses: ./.github/actions/setup-deps
with:

View File

@@ -0,0 +1,67 @@
package main
import (
"github.com/goplus/lib/c"
)
// This demo shows how to use //export with different symbol names on embedded targets.
//
// On embedded targets, you can export Go functions with different C symbol names.
// This is useful for hardware interrupt handlers that require specific names.
// Standard Go export - same name
//
//export HelloWorld
func HelloWorld() {
c.Printf(c.Str("Hello from "))
c.Printf(c.Str("HelloWorld\n"))
}
// Embedded target export - different name
// Go function name: interruptLPSPI2
// Exported C symbol: LPSPI2_IRQHandler
//
//export LPSPI2_IRQHandler
func interruptLPSPI2() {
c.Printf(c.Str("LPSPI2 interrupt "))
c.Printf(c.Str("handler called\n"))
}
// Embedded target export - different name
// Go function name: systemTickHandler
// Exported C symbol: SysTick_Handler
//
//export SysTick_Handler
func systemTickHandler() {
c.Printf(c.Str("SysTick "))
c.Printf(c.Str("handler called\n"))
}
// Embedded target export - different name
// Go function name: Add
// Exported C symbol: AddFunc
//
//export AddFunc
func Add(a, b int) int {
result := a + b
c.Printf(c.Str("AddFunc(%d, %d) = %d\n"), a, b, result)
return result
}
func main() {
c.Printf(c.Str("=== Export Demo ===\n\n"))
// Call exported functions directly from Go
c.Printf(c.Str("Calling HelloWorld:\n"))
HelloWorld()
c.Printf(c.Str("\nSimulating hardware interrupts:\n"))
interruptLPSPI2()
systemTickHandler()
c.Printf(c.Str("\nTesting function with return value:\n"))
result := Add(10, 20)
c.Printf(c.Str("Result: %d\n"), result)
c.Printf(c.Str("\n=== Demo Complete ===\n"))
}

View File

@@ -0,0 +1,54 @@
#!/bin/bash
set -e
echo "Building for embedded target..."
# Build for embedded target as executable
# Use llgo directly instead of llgo.sh to avoid go.mod version check
llgo build -o test-verify --target=esp32 .
echo "Checking exported symbols..."
# Get exported symbols
exported_symbols=$(nm -gU ./test-verify.elf | grep -E "(HelloWorld|LPSPI2_IRQHandler|SysTick_Handler|AddFunc)" | awk '{print $NF}')
echo ""
echo "Exported symbols:"
echo "$exported_symbols" | awk '{print " " $0}'
echo ""
# Check expected symbols
expected=("HelloWorld" "LPSPI2_IRQHandler" "SysTick_Handler" "AddFunc")
missing=""
for symbol in "${expected[@]}"; do
if ! echo "$exported_symbols" | grep -q "^$symbol$"; then
missing="$missing $symbol"
fi
done
if [ -n "$missing" ]; then
echo "❌ Missing symbols:$missing"
exit 1
fi
echo "✅ Symbol name mapping verification:"
echo " HelloWorld -> HelloWorld"
echo " interruptLPSPI2 -> LPSPI2_IRQHandler"
echo " systemTickHandler -> SysTick_Handler"
echo " Add -> AddFunc"
echo ""
echo "🎉 All export symbols verified successfully!"
echo ""
echo "Testing that non-embedded target rejects different export names..."
# Build without --target should fail with panic
if llgo build -o test-notarget . 2>&1 | grep -q 'export comment has wrong name "LPSPI2_IRQHandler"'; then
echo "✅ Correctly rejected different export name on non-embedded target"
else
echo "❌ Should have panicked with 'export comment has wrong name' error"
exit 1
fi
echo ""
echo "Note: Different symbol names are only supported on embedded targets."

View File

@@ -393,10 +393,10 @@ func TestErrImport(t *testing.T) {
func TestErrInitLinkname(t *testing.T) {
var ctx context
ctx.initLinkname("//llgo:link abc", func(name string) (string, bool, bool) {
ctx.initLinkname("//llgo:link abc", func(name string, isExport bool) (string, bool, bool) {
return "", false, false
})
ctx.initLinkname("//go:linkname Printf printf", func(name string) (string, bool, bool) {
ctx.initLinkname("//go:linkname Printf printf", func(name string, isExport bool) (string, bool, bool) {
return "", false, false
})
defer func() {
@@ -404,7 +404,7 @@ func TestErrInitLinkname(t *testing.T) {
t.Fatal("initLinkname: no error?")
}
}()
ctx.initLinkname("//go:linkname Printf printf", func(name string) (string, bool, bool) {
ctx.initLinkname("//go:linkname Printf printf", func(name string, isExport bool) (string, bool, bool) {
return "foo.Printf", false, name == "Printf"
})
}
@@ -506,3 +506,238 @@ func TestInstantiate(t *testing.T) {
t.Fatal("error")
}
}
func TestHandleExportDiffName(t *testing.T) {
tests := []struct {
name string
enableExportRename bool
line string
fullName string
inPkgName string
wantHasLinkname bool
wantLinkname string
wantExport string
}{
{
name: "ExportDiffNames_DifferentName",
enableExportRename: true,
line: "//export IRQ_Handler",
fullName: "pkg.HandleInterrupt",
inPkgName: "HandleInterrupt",
wantHasLinkname: true,
wantLinkname: "IRQ_Handler",
wantExport: "IRQ_Handler",
},
{
name: "ExportDiffNames_SameName",
enableExportRename: true,
line: "//export SameName",
fullName: "pkg.SameName",
inPkgName: "SameName",
wantHasLinkname: true,
wantLinkname: "SameName",
wantExport: "SameName",
},
{
name: "ExportDiffNames_WithSpaces",
enableExportRename: true,
line: "//export Timer_Callback ",
fullName: "pkg.OnTimerTick",
inPkgName: "OnTimerTick",
wantHasLinkname: true,
wantLinkname: "Timer_Callback",
wantExport: "Timer_Callback",
},
{
name: "ExportDiffNames_Disabled_MatchingName",
enableExportRename: false,
line: "//export Func",
fullName: "pkg.Func",
inPkgName: "Func",
wantHasLinkname: true,
wantLinkname: "Func",
wantExport: "Func",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Save and restore global state
oldEnableExportRename := enableExportRename
defer func() {
EnableExportRename(oldEnableExportRename)
}()
EnableExportRename(tt.enableExportRename)
// Setup context
prog := llssa.NewProgram(nil)
pkg := prog.NewPackage("test", "test")
ctx := &context{
prog: prog,
pkg: pkg,
}
// Call initLinkname with closure that mimics initLinknameByDoc behavior
ret := ctx.initLinkname(tt.line, func(name string, isExport bool) (string, bool, bool) {
return tt.fullName, false, name == tt.inPkgName || (isExport && enableExportRename)
})
// Verify result
hasLinkname := (ret == hasLinkname)
if hasLinkname != tt.wantHasLinkname {
t.Errorf("hasLinkname = %v, want %v", hasLinkname, tt.wantHasLinkname)
}
if tt.wantHasLinkname {
// Check linkname was set
if link, ok := prog.Linkname(tt.fullName); !ok || link != tt.wantLinkname {
t.Errorf("linkname = %q (ok=%v), want %q", link, ok, tt.wantLinkname)
}
// Check export was set
exports := pkg.ExportFuncs()
if export, ok := exports[tt.fullName]; !ok || export != tt.wantExport {
t.Errorf("export = %q (ok=%v), want %q", export, ok, tt.wantExport)
}
}
})
}
}
func TestInitLinknameByDocExportDiffNames(t *testing.T) {
tests := []struct {
name string
enableExportRename bool
doc *ast.CommentGroup
fullName string
inPkgName string
wantExported bool // Whether the symbol should be exported with different name
wantLinkname string
wantExport string
}{
{
name: "WithExportDiffNames_DifferentNameExported",
enableExportRename: true,
doc: &ast.CommentGroup{
List: []*ast.Comment{
{Text: "//export IRQ_Handler"},
},
},
fullName: "pkg.HandleInterrupt",
inPkgName: "HandleInterrupt",
wantExported: true,
wantLinkname: "IRQ_Handler",
wantExport: "IRQ_Handler",
},
{
name: "WithoutExportDiffNames_NotExported",
enableExportRename: false,
doc: &ast.CommentGroup{
List: []*ast.Comment{
{Text: "//export DifferentName"},
},
},
fullName: "pkg.HandleInterrupt",
inPkgName: "HandleInterrupt",
wantExported: false,
// Without enableExportRename, it goes through normal flow which expects same name
// The symbol "DifferentName" won't be found, so no export happens
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Without enableExportRename, export with different name will panic
if !tt.wantExported && !tt.enableExportRename {
defer func() {
if r := recover(); r == nil {
t.Error("expected panic for export with different name when enableExportRename=false")
}
}()
}
// Save and restore global state
oldEnableExportRename := enableExportRename
defer func() {
EnableExportRename(oldEnableExportRename)
}()
EnableExportRename(tt.enableExportRename)
// Setup context
prog := llssa.NewProgram(nil)
pkg := prog.NewPackage("test", "test")
ctx := &context{
prog: prog,
pkg: pkg,
}
// Call initLinknameByDoc
ctx.initLinknameByDoc(tt.doc, tt.fullName, tt.inPkgName, false)
// Verify export behavior
exports := pkg.ExportFuncs()
if tt.wantExported {
// Should have exported the symbol with different name
if export, ok := exports[tt.fullName]; !ok || export != tt.wantExport {
t.Errorf("export = %q (ok=%v), want %q", export, ok, tt.wantExport)
}
// Check linkname was also set
if link, ok := prog.Linkname(tt.fullName); !ok || link != tt.wantLinkname {
t.Errorf("linkname = %q (ok=%v), want %q", link, ok, tt.wantLinkname)
}
}
})
}
}
func TestInitLinkExportDiffNames(t *testing.T) {
tests := []struct {
name string
enableExportRename bool
line string
wantPanic bool
}{
{
name: "ExportDiffNames_Enabled_NoError",
enableExportRename: true,
line: "//export IRQ_Handler",
wantPanic: false,
},
{
name: "ExportDiffNames_Disabled_Panic",
enableExportRename: false,
line: "//export IRQ_Handler",
wantPanic: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if tt.wantPanic {
defer func() {
if r := recover(); r == nil {
t.Error("expected panic but didn't panic")
}
}()
}
oldEnableExportRename := enableExportRename
defer func() {
EnableExportRename(oldEnableExportRename)
}()
EnableExportRename(tt.enableExportRename)
prog := llssa.NewProgram(nil)
pkg := prog.NewPackage("test", "test")
ctx := &context{
prog: prog,
pkg: pkg,
}
ctx.initLinkname(tt.line, func(inPkgName string, isExport bool) (fullName string, isVar, ok bool) {
// Simulate initLinknames scenario: symbol not found (like in decl packages)
return "", false, false
})
})
}
}

View File

@@ -53,6 +53,11 @@ var (
enableDbg bool
enableDbgSyms bool
disableInline bool
// enableExportRename enables //export to use different C symbol names than Go function names.
// This is for TinyGo compatibility when using -target flag for embedded targets.
// Currently, using -target implies TinyGo embedded target mode.
enableExportRename bool
)
// SetDebug sets debug flags.
@@ -73,6 +78,12 @@ func EnableTrace(b bool) {
enableCallTracing = b
}
// EnableExportRename enables or disables //export with different C symbol names.
// This is enabled when using -target flag for TinyGo compatibility.
func EnableExportRename(b bool) {
enableExportRename = b
}
// -----------------------------------------------------------------------------
type instrOrValue interface {

View File

@@ -73,7 +73,7 @@ func (p *pkgSymInfo) initLinknames(ctx *context) {
lines := bytes.Split(b, sep)
for _, line := range lines {
if bytes.HasPrefix(line, commentPrefix) {
ctx.initLinkname(string(line), func(inPkgName string) (fullName string, isVar, ok bool) {
ctx.initLinkname(string(line), func(inPkgName string, isExport bool) (fullName string, isVar, ok bool) {
if sym, ok := p.syms[inPkgName]; ok && file == sym.file {
return sym.fullName, sym.isVar, true
}
@@ -277,8 +277,8 @@ func (p *context) initLinknameByDoc(doc *ast.CommentGroup, fullName, inPkgName s
if doc != nil {
for n := len(doc.List) - 1; n >= 0; n-- {
line := doc.List[n].Text
ret := p.initLinkname(line, func(name string) (_ string, _, ok bool) {
return fullName, isVar, name == inPkgName
ret := p.initLinkname(line, func(name string, isExport bool) (_ string, _, ok bool) {
return fullName, isVar, name == inPkgName || (isExport && enableExportRename)
})
if ret != unknownDirective {
return ret == hasLinkname
@@ -294,7 +294,7 @@ const (
unknownDirective = -1
)
func (p *context) initLinkname(line string, f func(inPkgName string) (fullName string, isVar, ok bool)) int {
func (p *context) initLinkname(line string, f func(inPkgName string, isExport bool) (fullName string, isVar, ok bool)) int {
const (
linkname = "//go:linkname "
llgolink = "//llgo:link "
@@ -324,17 +324,24 @@ func (p *context) initLinkname(line string, f func(inPkgName string) (fullName s
return noDirective
}
func (p *context) initLink(line string, prefix int, export bool, f func(inPkgName string) (fullName string, isVar, ok bool)) {
func (p *context) initLink(line string, prefix int, export bool, f func(inPkgName string, isExport bool) (fullName string, isVar, ok bool)) {
text := strings.TrimSpace(line[prefix:])
if idx := strings.IndexByte(text, ' '); idx > 0 {
inPkgName := text[:idx]
if fullName, _, ok := f(inPkgName); ok {
if fullName, _, ok := f(inPkgName, export); ok {
link := strings.TrimLeft(text[idx+1:], " ")
p.prog.SetLinkname(fullName, link)
if export {
p.pkg.SetExport(fullName, link)
}
} else {
// Export with different names already processed by initLinknameByDoc
if export && enableExportRename {
return
}
if export {
panic(fmt.Sprintf("export comment has wrong name %q", inPkgName))
}
fmt.Fprintln(os.Stderr, "==>", line)
fmt.Fprintf(os.Stderr, "llgo: linkname %s not found and ignored\n", inPkgName)
}

View File

@@ -220,6 +220,11 @@ func Do(args []string, conf *Config) ([]Package, error) {
conf.Goarch = export.GOARCH
}
// Enable different export names for TinyGo compatibility when using -target
if conf.Target != "" {
cl.EnableExportRename(true)
}
verbose := conf.Verbose
patterns := args
tags := "llgo,math_big_pure_go"
@@ -782,11 +787,11 @@ func linkMainPkg(ctx *context, pkg *packages.Package, pkgs []*aPackage, outputPa
}
})
// Generate main module file (needed for global variables even in library modes)
entryObjFile, err := genMainModuleFile(ctx, llssa.PkgRuntime, pkg, needRuntime, needPyInit)
entryPkg := genMainModule(ctx, llssa.PkgRuntime, pkg, needRuntime, needPyInit)
entryObjFile, err := exportObject(ctx, entryPkg.PkgPath, entryPkg.ExportFile, []byte(entryPkg.LPkg.String()))
if err != nil {
return err
}
// defer os.Remove(entryLLFile)
objFiles = append(objFiles, entryObjFile)
// Compile extra files from target configuration
@@ -908,118 +913,6 @@ func needStart(ctx *context) bool {
}
}
func genMainModuleFile(ctx *context, rtPkgPath string, pkg *packages.Package, needRuntime, needPyInit bool) (path string, err error) {
var (
pyInitDecl string
pyInit string
rtInitDecl string
rtInit string
)
mainPkgPath := pkg.PkgPath
if needRuntime {
rtInit = "call void @\"" + rtPkgPath + ".init\"()"
rtInitDecl = "declare void @\"" + rtPkgPath + ".init\"()"
}
if needPyInit {
pyInit = "call void @Py_Initialize()"
pyInitDecl = "declare void @Py_Initialize()"
}
declSizeT := "%size_t = type i64"
if is32Bits(ctx.buildConf.Goarch) {
declSizeT = "%size_t = type i32"
}
stdioDecl := ""
stdioNobuf := ""
if IsStdioNobuf() {
stdioDecl = `
@stdout = external global ptr
@stderr = external global ptr
@__stdout = external global ptr
@__stderr = external global ptr
declare i32 @setvbuf(ptr, ptr, i32, %size_t)
`
stdioNobuf = `
; Set stdout with no buffer
%stdout_is_null = icmp eq ptr @stdout, null
%stdout_ptr = select i1 %stdout_is_null, ptr @__stdout, ptr @stdout
call i32 @setvbuf(ptr %stdout_ptr, ptr null, i32 2, %size_t 0)
; Set stderr with no buffer
%stderr_ptr = select i1 %stdout_is_null, ptr @__stderr, ptr @stderr
call i32 @setvbuf(ptr %stderr_ptr, ptr null, i32 2, %size_t 0)
`
}
// TODO(lijie): workaround for libc-free
// Remove main/_start when -buildmode and libc are ready
startDefine := `
define weak void @_start() {
; argc = 0
%argc = add i32 0, 0
; argv = null
%argv = inttoptr i64 0 to i8**
call i32 @main(i32 %argc, i8** %argv)
ret void
}
`
mainDefine := "define i32 @main(i32 noundef %0, ptr nocapture noundef readnone %1) local_unnamed_addr"
if !needStart(ctx) && isWasmTarget(ctx.buildConf.Goos) {
mainDefine = "define hidden noundef i32 @__main_argc_argv(i32 noundef %0, ptr nocapture noundef readnone %1) local_unnamed_addr"
}
if !needStart(ctx) {
startDefine = ""
}
var mainCode string
// For library modes (c-archive, c-shared), only generate global variables
if ctx.buildConf.BuildMode != BuildModeExe {
mainCode = `; ModuleID = 'main'
source_filename = "main"
@__llgo_argc = global i32 0, align 4
@__llgo_argv = global ptr null, align 8
`
} else {
// For executable mode, generate full main function
mainCode = fmt.Sprintf(`; ModuleID = 'main'
source_filename = "main"
%s
@__llgo_argc = global i32 0, align 4
@__llgo_argv = global ptr null, align 8
%s
%s
%s
declare void @"%s.init"()
declare void @"%s.main"()
define weak void @runtime.init() {
ret void
}
; TODO(lijie): workaround for syscall patch
define weak void @"syscall.init"() {
ret void
}
%s
%s {
_llgo_0:
store i32 %%0, ptr @__llgo_argc, align 4
store ptr %%1, ptr @__llgo_argv, align 8
%s
%s
%s
call void @runtime.init()
call void @"%s.init"()
call void @"%s.main"()
ret i32 0
}
`, declSizeT, stdioDecl,
pyInitDecl, rtInitDecl, mainPkgPath, mainPkgPath,
startDefine, mainDefine, stdioNobuf,
pyInit, rtInit, mainPkgPath, mainPkgPath)
}
return exportObject(ctx, pkg.PkgPath+".main", pkg.ExportFile+"-main", []byte(mainCode))
}
func is32Bits(goarch string) bool {
return goarch == "386" || goarch == "arm" || goarch == "mips" || goarch == "wasm"
}

View File

@@ -159,3 +159,8 @@ func runBinary(t *testing.T, path string, args ...string) string {
}
return string(output)
}
func TestRunPrintfWithStdioNobuf(t *testing.T) {
t.Setenv(llgoStdioNobuf, "1")
mockRun([]string{"../../cl/_testdata/printf"}, &Config{Mode: ModeRun})
}

View File

@@ -0,0 +1,231 @@
//go:build !llgo
// +build !llgo
/*
* Copyright (c) 2024 The GoPlus Authors (goplus.org). All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// Package build contains the llgo compiler build orchestration logic.
//
// The main_module.go file generates the entry point module for llgo programs,
// which contains the main() function, initialization sequence, and global
// variables like argc/argv. This module is generated differently depending on
// BuildMode (exe, c-archive, c-shared).
package build
import (
"go/token"
"go/types"
"github.com/goplus/llgo/internal/packages"
llvm "github.com/goplus/llvm"
llssa "github.com/goplus/llgo/ssa"
)
// genMainModule generates the main entry module for an llgo program.
//
// The module contains argc/argv globals and, for executable build modes,
// the entry function that wires initialization and main. For C archive or
// shared library modes, only the globals are emitted.
func genMainModule(ctx *context, rtPkgPath string, pkg *packages.Package, needRuntime, needPyInit bool) Package {
prog := ctx.prog
mainPkg := prog.NewPackage("", pkg.ID+".main")
argcVar := mainPkg.NewVarEx("__llgo_argc", prog.Pointer(prog.Int32()))
argcVar.Init(prog.Zero(prog.Int32()))
argvValueType := prog.Pointer(prog.CStr())
argvVar := mainPkg.NewVarEx("__llgo_argv", prog.Pointer(argvValueType))
argvVar.InitNil()
exportFile := pkg.ExportFile
if exportFile == "" {
exportFile = pkg.PkgPath
}
mainAPkg := &aPackage{
Package: &packages.Package{
PkgPath: pkg.PkgPath + ".main",
ExportFile: exportFile + "-main",
},
LPkg: mainPkg,
}
if ctx.buildConf.BuildMode != BuildModeExe {
return mainAPkg
}
runtimeStub := defineWeakNoArgStub(mainPkg, "runtime.init")
// TODO(lijie): workaround for syscall patch
defineWeakNoArgStub(mainPkg, "syscall.init")
var pyInit llssa.Function
if needPyInit {
pyInit = declareNoArgFunc(mainPkg, "Py_Initialize")
}
var rtInit llssa.Function
if needRuntime {
rtInit = declareNoArgFunc(mainPkg, rtPkgPath+".init")
}
mainInit := declareNoArgFunc(mainPkg, pkg.PkgPath+".init")
mainMain := declareNoArgFunc(mainPkg, pkg.PkgPath+".main")
entryFn := defineEntryFunction(ctx, mainPkg, argcVar, argvVar, argvValueType, runtimeStub, mainInit, mainMain, pyInit, rtInit)
if needStart(ctx) {
defineStart(mainPkg, entryFn, argvValueType)
}
return mainAPkg
}
// defineEntryFunction creates the program's entry function. The name is
// "main" for standard targets, or "__main_argc_argv" with hidden visibility
// for WASM targets that don't require _start.
//
// The entry stores argc/argv, optionally disables stdio buffering, runs
// initialization hooks (Python, runtime, package init), and finally calls
// main.main before returning 0.
func defineEntryFunction(ctx *context, pkg llssa.Package, argcVar, argvVar llssa.Global, argvType llssa.Type, runtimeStub, mainInit, mainMain llssa.Function, pyInit, rtInit llssa.Function) llssa.Function {
prog := pkg.Prog
entryName := "main"
if !needStart(ctx) && isWasmTarget(ctx.buildConf.Goos) {
entryName = "__main_argc_argv"
}
sig := newEntrySignature(argvType.RawType())
fn := pkg.NewFunc(entryName, sig, llssa.InC)
fnVal := pkg.Module().NamedFunction(entryName)
if entryName != "main" {
fnVal.SetVisibility(llvm.HiddenVisibility)
fnVal.SetUnnamedAddr(true)
}
b := fn.MakeBody(1)
b.Store(argcVar.Expr, fn.Param(0))
b.Store(argvVar.Expr, fn.Param(1))
if IsStdioNobuf() {
emitStdioNobuf(b, pkg, ctx.buildConf.Goos)
}
if pyInit != nil {
b.Call(pyInit.Expr)
}
if rtInit != nil {
b.Call(rtInit.Expr)
}
b.Call(runtimeStub.Expr)
b.Call(mainInit.Expr)
b.Call(mainMain.Expr)
b.Return(prog.IntVal(0, prog.Int32()))
return fn
}
func defineStart(pkg llssa.Package, entry llssa.Function, argvType llssa.Type) {
fn := pkg.NewFunc("_start", llssa.NoArgsNoRet, llssa.InC)
pkg.Module().NamedFunction("_start").SetLinkage(llvm.WeakAnyLinkage)
b := fn.MakeBody(1)
prog := pkg.Prog
b.Call(entry.Expr, prog.IntVal(0, prog.Int32()), prog.Nil(argvType))
b.Return()
}
func declareNoArgFunc(pkg llssa.Package, name string) llssa.Function {
return pkg.NewFunc(name, llssa.NoArgsNoRet, llssa.InC)
}
func defineWeakNoArgStub(pkg llssa.Package, name string) llssa.Function {
fn := pkg.NewFunc(name, llssa.NoArgsNoRet, llssa.InC)
pkg.Module().NamedFunction(name).SetLinkage(llvm.WeakAnyLinkage)
b := fn.MakeBody(1)
b.Return()
return fn
}
const (
// ioNoBuf represents the _IONBF flag for setvbuf (no buffering)
ioNoBuf = 2
)
// emitStdioNobuf generates code to disable buffering on stdout and stderr
// when the LLGO_STDIO_NOBUF environment variable is set. Only Darwin uses
// the alternate `__stdoutp`/`__stderrp` symbols; other targets rely on the
// standard `stdout`/`stderr` globals.
func emitStdioNobuf(b llssa.Builder, pkg llssa.Package, goos string) {
prog := pkg.Prog
streamType := prog.VoidPtr()
streamPtrType := prog.Pointer(streamType)
stdoutName := "stdout"
stderrName := "stderr"
if goos == "darwin" {
stdoutName = "__stdoutp"
stderrName = "__stderrp"
}
stdout := declareExternalPtrGlobal(pkg, stdoutName, streamPtrType)
stderr := declareExternalPtrGlobal(pkg, stderrName, streamPtrType)
stdoutPtr := b.Load(stdout)
stderrPtr := b.Load(stderr)
sizeType := prog.Uintptr()
setvbuf := declareSetvbuf(pkg, streamPtrType, prog.CStr(), prog.Int32(), sizeType)
noBufMode := prog.IntVal(ioNoBuf, prog.Int32())
zeroSize := prog.Zero(sizeType)
nullBuf := prog.Nil(prog.CStr())
b.Call(setvbuf.Expr, stdoutPtr, nullBuf, noBufMode, zeroSize)
b.Call(setvbuf.Expr, stderrPtr, nullBuf, noBufMode, zeroSize)
}
func declareExternalPtrGlobal(pkg llssa.Package, name string, valueType llssa.Type) llssa.Expr {
global := pkg.NewVarEx(name, valueType)
pkg.Module().NamedGlobal(name).SetLinkage(llvm.ExternalLinkage)
return global.Expr
}
func declareSetvbuf(pkg llssa.Package, streamPtrType, bufPtrType, intType, sizeType llssa.Type) llssa.Function {
sig := newSignature(
[]types.Type{
streamPtrType.RawType(),
bufPtrType.RawType(),
intType.RawType(),
sizeType.RawType(),
},
[]types.Type{intType.RawType()},
)
return pkg.NewFunc("setvbuf", sig, llssa.InC)
}
func tupleOf(tys ...types.Type) *types.Tuple {
if len(tys) == 0 {
return types.NewTuple()
}
vars := make([]*types.Var, len(tys))
for i, t := range tys {
vars[i] = types.NewParam(token.NoPos, nil, "", t)
}
return types.NewTuple(vars...)
}
func newSignature(params []types.Type, results []types.Type) *types.Signature {
return types.NewSignatureType(nil, nil, nil, tupleOf(params...), tupleOf(results...), false)
}
func newEntrySignature(argvType types.Type) *types.Signature {
return newSignature(
[]types.Type{types.Typ[types.Int32], argvType},
[]types.Type{types.Typ[types.Int32]},
)
}

View File

@@ -0,0 +1,70 @@
//go:build !llgo
// +build !llgo
package build
import (
"strings"
"testing"
"github.com/goplus/llvm"
"github.com/goplus/llgo/internal/packages"
llssa "github.com/goplus/llgo/ssa"
)
func init() {
llssa.Initialize(llssa.InitAll)
}
func TestGenMainModuleExecutable(t *testing.T) {
llvm.InitializeAllTargets()
t.Setenv(llgoStdioNobuf, "")
ctx := &context{
prog: llssa.NewProgram(nil),
buildConf: &Config{
BuildMode: BuildModeExe,
Goos: "linux",
Goarch: "amd64",
},
}
pkg := &packages.Package{PkgPath: "example.com/foo", ExportFile: "foo.a"}
mod := genMainModule(ctx, llssa.PkgRuntime, pkg, true, true)
if mod.ExportFile != "foo.a-main" {
t.Fatalf("unexpected export file: %s", mod.ExportFile)
}
ir := mod.LPkg.String()
checks := []string{
"define i32 @main(",
"call void @Py_Initialize()",
"call void @\"example.com/foo.init\"()",
"define weak void @_start()",
}
for _, want := range checks {
if !strings.Contains(ir, want) {
t.Fatalf("main module IR missing %q:\n%s", want, ir)
}
}
}
func TestGenMainModuleLibrary(t *testing.T) {
llvm.InitializeAllTargets()
t.Setenv(llgoStdioNobuf, "")
ctx := &context{
prog: llssa.NewProgram(nil),
buildConf: &Config{
BuildMode: BuildModeCArchive,
Goos: "linux",
Goarch: "amd64",
},
}
pkg := &packages.Package{PkgPath: "example.com/foo", ExportFile: "foo.a"}
mod := genMainModule(ctx, llssa.PkgRuntime, pkg, false, false)
ir := mod.LPkg.String()
if strings.Contains(ir, "define i32 @main") {
t.Fatalf("library mode should not emit main function:\n%s", ir)
}
if !strings.Contains(ir, "@__llgo_argc = global i32 0") {
t.Fatalf("library mode missing argc global:\n%s", ir)
}
}

View File

@@ -1,5 +1,4 @@
//go:build !nogc
// +build !nogc
//go:build !nogc && !baremetal
/*
* Copyright (c) 2024 The GoPlus Authors (goplus.org). All rights reserved.

View File

@@ -1,5 +1,4 @@
//go:build nogc
// +build nogc
//go:build nogc || baremetal
/*
* Copyright (c) 2024 The GoPlus Authors (goplus.org). All rights reserved.

View File

@@ -1,4 +1,4 @@
//go:build llgo
//go:build llgo && !baremetal
/*
* Copyright (c) 2025 The GoPlus Authors (goplus.org). All rights reserved.

View File

@@ -1,4 +1,4 @@
//go:build llgo && !nogc
//go:build llgo && !baremetal && !nogc
/*
* Copyright (c) 2025 The GoPlus Authors (goplus.org). All rights reserved.

View File

@@ -1,4 +1,4 @@
//go:build llgo && nogc
//go:build llgo && (nogc || baremetal)
/*
* Copyright (c) 2025 The GoPlus Authors (goplus.org). All rights reserved.

View File

@@ -1,4 +1,4 @@
//go:build !llgo
//go:build !llgo || baremetal
/*
* Copyright (c) 2025 The GoPlus Authors (goplus.org). All rights reserved.

View File

@@ -4,8 +4,6 @@
package runtime
import "runtime"
// Layout of in-memory per-function information prepared by linker
// See https://golang.org/s/go12symtab.
// Keep in sync with linker (../cmd/link/internal/ld/pcln.go:/pclntab)
@@ -30,10 +28,6 @@ func StopTrace() {
panic("todo: runtime.StopTrace")
}
func ReadMemStats(m *runtime.MemStats) {
panic("todo: runtime.ReadMemStats")
}
func SetMutexProfileFraction(rate int) int {
panic("todo: runtime.SetMutexProfileFraction")
}

View File

@@ -1,8 +1,16 @@
//go:build !nogc
//go:build !nogc && !baremetal
package runtime
import "github.com/goplus/llgo/runtime/internal/clite/bdwgc"
import (
"runtime"
"github.com/goplus/llgo/runtime/internal/clite/bdwgc"
)
func ReadMemStats(m *runtime.MemStats) {
panic("todo: runtime.ReadMemStats")
}
func GC() {
bdwgc.Gcollect()

View File

@@ -0,0 +1,29 @@
//go:build !nogc && baremetal
package runtime
import (
"runtime"
"github.com/goplus/llgo/runtime/internal/runtime/tinygogc"
)
func ReadMemStats(m *runtime.MemStats) {
stats := tinygogc.ReadGCStats()
m.Alloc = stats.Alloc
m.TotalAlloc = stats.TotalAlloc
m.Sys = stats.Sys
m.Mallocs = stats.Mallocs
m.Frees = stats.Frees
m.HeapAlloc = stats.HeapAlloc
m.HeapSys = stats.HeapSys
m.HeapIdle = stats.HeapIdle
m.HeapInuse = stats.HeapInuse
m.StackInuse = stats.StackInuse
m.StackSys = stats.StackSys
m.GCSys = stats.GCSys
}
func GC() {
tinygogc.GC()
}

View File

@@ -0,0 +1,184 @@
//go:build baremetal || testGC
package tinygogc
import "unsafe"
type GCStats struct {
// General statistics.
// Alloc is bytes of allocated heap objects.
//
// This is the same as HeapAlloc (see below).
Alloc uint64
// TotalAlloc is cumulative bytes allocated for heap objects.
//
// TotalAlloc increases as heap objects are allocated, but
// unlike Alloc and HeapAlloc, it does not decrease when
// objects are freed.
TotalAlloc uint64
// Sys is the total bytes of memory obtained from the OS.
//
// Sys is the sum of the XSys fields below. Sys measures the
// virtual address space reserved by the Go runtime for the
// heap, stacks, and other internal data structures. It's
// likely that not all of the virtual address space is backed
// by physical memory at any given moment, though in general
// it all was at some point.
Sys uint64
// Mallocs is the cumulative count of heap objects allocated.
// The number of live objects is Mallocs - Frees.
Mallocs uint64
// Frees is the cumulative count of heap objects freed.
Frees uint64
// Heap memory statistics.
//
// Interpreting the heap statistics requires some knowledge of
// how Go organizes memory. Go divides the virtual address
// space of the heap into "spans", which are contiguous
// regions of memory 8K or larger. A span may be in one of
// three states:
//
// An "idle" span contains no objects or other data. The
// physical memory backing an idle span can be released back
// to the OS (but the virtual address space never is), or it
// can be converted into an "in use" or "stack" span.
//
// An "in use" span contains at least one heap object and may
// have free space available to allocate more heap objects.
//
// A "stack" span is used for goroutine stacks. Stack spans
// are not considered part of the heap. A span can change
// between heap and stack memory; it is never used for both
// simultaneously.
// HeapAlloc is bytes of allocated heap objects.
//
// "Allocated" heap objects include all reachable objects, as
// well as unreachable objects that the garbage collector has
// not yet freed. Specifically, HeapAlloc increases as heap
// objects are allocated and decreases as the heap is swept
// and unreachable objects are freed. Sweeping occurs
// incrementally between GC cycles, so these two processes
// occur simultaneously, and as a result HeapAlloc tends to
// change smoothly (in contrast with the sawtooth that is
// typical of stop-the-world garbage collectors).
HeapAlloc uint64
// HeapSys is bytes of heap memory obtained from the OS.
//
// HeapSys measures the amount of virtual address space
// reserved for the heap. This includes virtual address space
// that has been reserved but not yet used, which consumes no
// physical memory, but tends to be small, as well as virtual
// address space for which the physical memory has been
// returned to the OS after it became unused (see HeapReleased
// for a measure of the latter).
//
// HeapSys estimates the largest size the heap has had.
HeapSys uint64
// HeapIdle is bytes in idle (unused) spans.
//
// Idle spans have no objects in them. These spans could be
// (and may already have been) returned to the OS, or they can
// be reused for heap allocations, or they can be reused as
// stack memory.
//
// HeapIdle minus HeapReleased estimates the amount of memory
// that could be returned to the OS, but is being retained by
// the runtime so it can grow the heap without requesting more
// memory from the OS. If this difference is significantly
// larger than the heap size, it indicates there was a recent
// transient spike in live heap size.
HeapIdle uint64
// HeapInuse is bytes in in-use spans.
//
// In-use spans have at least one object in them. These spans
// can only be used for other objects of roughly the same
// size.
//
// HeapInuse minus HeapAlloc estimates the amount of memory
// that has been dedicated to particular size classes, but is
// not currently being used. This is an upper bound on
// fragmentation, but in general this memory can be reused
// efficiently.
HeapInuse uint64
// Stack memory statistics.
//
// Stacks are not considered part of the heap, but the runtime
// can reuse a span of heap memory for stack memory, and
// vice-versa.
// StackInuse is bytes in stack spans.
//
// In-use stack spans have at least one stack in them. These
// spans can only be used for other stacks of the same size.
//
// There is no StackIdle because unused stack spans are
// returned to the heap (and hence counted toward HeapIdle).
StackInuse uint64
// StackSys is bytes of stack memory obtained from the OS.
//
// StackSys is StackInuse, plus any memory obtained directly
// from the OS for OS thread stacks.
//
// In non-cgo programs this metric is currently equal to StackInuse
// (but this should not be relied upon, and the value may change in
// the future).
//
// In cgo programs this metric includes OS thread stacks allocated
// directly from the OS. Currently, this only accounts for one stack in
// c-shared and c-archive build modes and other sources of stacks from
// the OS (notably, any allocated by C code) are not currently measured.
// Note this too may change in the future.
StackSys uint64
// GCSys is bytes of memory in garbage collection metadata.
GCSys uint64
}
func ReadGCStats() GCStats {
var heapInuse, heapIdle uint64
lock(&gcMutex)
for block := uintptr(0); block < endBlock; block++ {
bstate := gcStateOf(block)
if bstate == blockStateFree {
heapIdle += uint64(bytesPerBlock)
} else {
heapInuse += uint64(bytesPerBlock)
}
}
stackEnd := uintptr(unsafe.Pointer(&_stackEnd))
stackSys := stackTop - stackEnd
stats := GCStats{
Alloc: (gcTotalBlocks - gcFreedBlocks) * uint64(bytesPerBlock),
TotalAlloc: gcTotalAlloc,
Sys: uint64(heapEnd - heapStart),
Mallocs: gcMallocs,
Frees: gcFrees,
HeapAlloc: (gcTotalBlocks - gcFreedBlocks) * uint64(bytesPerBlock),
HeapSys: heapInuse + heapIdle,
HeapIdle: heapIdle,
HeapInuse: heapInuse,
StackInuse: uint64(stackTop - uintptr(getsp())),
StackSys: uint64(stackSys),
GCSys: uint64(heapEnd - uintptr(metadataStart)),
}
unlock(&gcMutex)
return stats
}

View File

@@ -0,0 +1,54 @@
//go:build !testGC
package tinygogc
import (
"unsafe"
_ "unsafe"
)
// LLGoPackage instructs the LLGo linker to wrap C standard library memory allocation
// functions (malloc, realloc, calloc) so they use the tinygogc allocator instead.
// This ensures all memory allocations go through the GC, including C library calls.
const LLGoPackage = "link: --wrap=malloc --wrap=realloc --wrap=calloc"
//export __wrap_malloc
func __wrap_malloc(size uintptr) unsafe.Pointer {
return Alloc(size)
}
//export __wrap_calloc
func __wrap_calloc(nmemb, size uintptr) unsafe.Pointer {
totalSize := nmemb * size
// Check for multiplication overflow
if nmemb != 0 && totalSize/nmemb != size {
return nil // Overflow
}
return Alloc(totalSize)
}
//export __wrap_realloc
func __wrap_realloc(ptr unsafe.Pointer, size uintptr) unsafe.Pointer {
return Realloc(ptr, size)
}
//go:linkname getsp llgo.stackSave
func getsp() unsafe.Pointer
//go:linkname _heapStart _heapStart
var _heapStart [0]byte
//go:linkname _heapEnd _heapEnd
var _heapEnd [0]byte
//go:linkname _stackStart _stack_top
var _stackStart [0]byte
//go:linkname _stackEnd _stack_end
var _stackEnd [0]byte
//go:linkname _globals_start _globals_start
var _globals_start [0]byte
//go:linkname _globals_end _globals_end
var _globals_end [0]byte

View File

@@ -0,0 +1,25 @@
//go:build testGC
package tinygogc
import (
_ "unsafe"
)
var currentStack uintptr
func getsp() uintptr {
return currentStack
}
var _heapStart [0]byte
var _heapEnd [0]byte
var _stackStart [0]byte
var _stackEnd [0]byte
var _globals_start [0]byte
var _globals_end [0]byte

View File

@@ -0,0 +1,570 @@
//go:build baremetal || testGC
/*
* Copyright (c) 2018-2025 The TinyGo Authors. All rights reserved.
* Copyright (c) 2024 The GoPlus Authors (goplus.org). All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
// Package tinygogc implements a conservative mark-and-sweep garbage collector
// for baremetal environments where the standard Go runtime and bdwgc are unavailable.
//
// This implementation is based on TinyGo's GC and is designed for resource-constrained
// embedded systems. It uses a block-based allocator with conservative pointer scanning.
//
// Build tags:
// - baremetal: Enables this GC for baremetal targets
// - testGC: Enables testing mode with mock implementations
//
// Memory Layout:
// The heap is divided into fixed-size blocks (32 bytes on 64-bit). Metadata is stored
// at the end of the heap, using 2 bits per block to track state (free/head/tail/mark).
package tinygogc
import (
"unsafe"
c "github.com/goplus/llgo/runtime/internal/clite"
)
const gcDebug = false
// blockState stores the four states in which a block can be. It is two bits in
// size.
const (
blockStateFree uint8 = 0 // 00
blockStateHead uint8 = 1 // 01
blockStateTail uint8 = 2 // 10
blockStateMark uint8 = 3 // 11
blockStateMask uint8 = 3 // 11
)
// The byte value of a block where every block is a 'tail' block.
const blockStateByteAllTails = 0 |
uint8(blockStateTail<<(stateBits*3)) |
uint8(blockStateTail<<(stateBits*2)) |
uint8(blockStateTail<<(stateBits*1)) |
uint8(blockStateTail<<(stateBits*0))
var (
heapStart uintptr // start address of heap area
heapEnd uintptr // end address of heap area
globalsStart uintptr // start address of global variable area
globalsEnd uintptr // end address of global variable area
stackTop uintptr // the top of stack
endBlock uintptr // GC end block index
metadataStart unsafe.Pointer // start address of GC metadata
nextAlloc uintptr // the next block that should be tried by the allocator
gcTotalAlloc uint64 // total number of bytes allocated
gcTotalBlocks uint64 // total number of allocated blocks
gcMallocs uint64 // total number of allocations
gcFrees uint64 // total number of objects freed
gcFreedBlocks uint64 // total number of freed blocks
// markStackOverflow is a flag which is set when the GC scans too deep while marking.
// After it is set, all marked allocations must be re-scanned.
markStackOverflow bool
// zeroSizedAlloc is just a sentinel that gets returned when allocating 0 bytes.
zeroSizedAlloc uint8
gcMutex mutex // gcMutex protects GC related variables
isGCInit bool // isGCInit indicates GC initialization state
)
// Some globals + constants for the entire GC.
const (
wordsPerBlock = 4 // number of pointers in an allocated block
bytesPerBlock = wordsPerBlock * unsafe.Sizeof(heapStart)
stateBits = 2 // how many bits a block state takes (see blockState type)
blocksPerStateByte = 8 / stateBits
markStackSize = 8 * unsafe.Sizeof((*int)(nil)) // number of to-be-marked blocks to queue before forcing a rescan
)
// this function MUST be initialized first, which means it's required to be initialized before runtime
func initGC() {
// reserve 2K blocks for libc internal malloc, we cannot wrap those internal functions
heapStart = uintptr(unsafe.Pointer(&_heapStart)) + 2048
heapEnd = uintptr(unsafe.Pointer(&_heapEnd))
globalsStart = uintptr(unsafe.Pointer(&_globals_start))
globalsEnd = uintptr(unsafe.Pointer(&_globals_end))
totalSize := heapEnd - heapStart
metadataSize := (totalSize + blocksPerStateByte*bytesPerBlock) / (1 + blocksPerStateByte*bytesPerBlock)
metadataStart = unsafe.Pointer(heapEnd - metadataSize)
endBlock = (uintptr(metadataStart) - heapStart) / bytesPerBlock
stackTop = uintptr(unsafe.Pointer(&_stackStart))
c.Memset(metadataStart, 0, metadataSize)
}
func lazyInit() {
if !isGCInit {
initGC()
isGCInit = true
}
}
func gcPanic(s *c.Char) {
c.Printf(c.Str("%s"), s)
c.Exit(2)
}
// blockFromAddr returns a block given an address somewhere in the heap (which
// might not be heap-aligned).
func blockFromAddr(addr uintptr) uintptr {
if addr < heapStart || addr >= uintptr(metadataStart) {
gcPanic(c.Str("gc: trying to get block from invalid address"))
}
return (addr - heapStart) / bytesPerBlock
}
// Return a pointer to the start of the allocated object.
func gcPointerOf(blockAddr uintptr) unsafe.Pointer {
return unsafe.Pointer(gcAddressOf(blockAddr))
}
// Return the address of the start of the allocated object.
func gcAddressOf(blockAddr uintptr) uintptr {
addr := heapStart + blockAddr*bytesPerBlock
if addr > uintptr(metadataStart) {
gcPanic(c.Str("gc: block pointing inside metadata"))
}
return addr
}
// findHead returns the head (first block) of an object, assuming the block
// points to an allocated object. It returns the same block if this block
// already points to the head.
func gcFindHead(blockAddr uintptr) uintptr {
for {
// Optimization: check whether the current block state byte (which
// contains the state of multiple blocks) is composed entirely of tail
// blocks. If so, we can skip back to the last block in the previous
// state byte.
// This optimization speeds up findHead for pointers that point into a
// large allocation.
stateByte := gcStateByteOf(blockAddr)
if stateByte == blockStateByteAllTails {
blockAddr -= (blockAddr % blocksPerStateByte) + 1
continue
}
// Check whether we've found a non-tail block, which means we found the
// head.
state := gcStateFromByte(blockAddr, stateByte)
if state != blockStateTail {
break
}
blockAddr--
}
if gcStateOf(blockAddr) != blockStateHead && gcStateOf(blockAddr) != blockStateMark {
gcPanic(c.Str("gc: found tail without head"))
}
return blockAddr
}
// findNext returns the first block just past the end of the tail. This may or
// may not be the head of an object.
func gcFindNext(blockAddr uintptr) uintptr {
if gcStateOf(blockAddr) == blockStateHead || gcStateOf(blockAddr) == blockStateMark {
blockAddr++
}
for gcAddressOf(blockAddr) < uintptr(metadataStart) && gcStateOf(blockAddr) == blockStateTail {
blockAddr++
}
return blockAddr
}
func gcStateByteOf(blockAddr uintptr) byte {
return *(*uint8)(unsafe.Add(metadataStart, blockAddr/blocksPerStateByte))
}
// Return the block state given a state byte. The state byte must have been
// obtained using b.stateByte(), otherwise the result is incorrect.
func gcStateFromByte(blockAddr uintptr, stateByte byte) uint8 {
return uint8(stateByte>>((blockAddr%blocksPerStateByte)*stateBits)) & blockStateMask
}
// State returns the current block state.
func gcStateOf(blockAddr uintptr) uint8 {
return gcStateFromByte(blockAddr, gcStateByteOf(blockAddr))
}
// setState sets the current block to the given state, which must contain more
// bits than the current state. Allowed transitions: from free to any state and
// from head to mark.
func gcSetState(blockAddr uintptr, newState uint8) {
stateBytePtr := (*uint8)(unsafe.Add(metadataStart, blockAddr/blocksPerStateByte))
*stateBytePtr |= uint8(newState << ((blockAddr % blocksPerStateByte) * stateBits))
if gcStateOf(blockAddr) != newState {
gcPanic(c.Str("gc: setState() was not successful"))
}
}
// markFree sets the block state to free, no matter what state it was in before.
func gcMarkFree(blockAddr uintptr) {
stateBytePtr := (*uint8)(unsafe.Add(metadataStart, blockAddr/blocksPerStateByte))
*stateBytePtr &^= uint8(blockStateMask << ((blockAddr % blocksPerStateByte) * stateBits))
if gcStateOf(blockAddr) != blockStateFree {
gcPanic(c.Str("gc: markFree() was not successful"))
}
*(*[wordsPerBlock]uintptr)(unsafe.Pointer(gcAddressOf(blockAddr))) = [wordsPerBlock]uintptr{}
}
// unmark changes the state of the block from mark to head. It must be marked
// before calling this function.
func gcUnmark(blockAddr uintptr) {
if gcStateOf(blockAddr) != blockStateMark {
gcPanic(c.Str("gc: unmark() on a block that is not marked"))
}
clearMask := blockStateMask ^ blockStateHead // the bits to clear from the state
stateBytePtr := (*uint8)(unsafe.Add(metadataStart, blockAddr/blocksPerStateByte))
*stateBytePtr &^= uint8(clearMask << ((blockAddr % blocksPerStateByte) * stateBits))
if gcStateOf(blockAddr) != blockStateHead {
gcPanic(c.Str("gc: unmark() was not successful"))
}
}
func isOnHeap(ptr uintptr) bool {
return ptr >= heapStart && ptr < uintptr(metadataStart)
}
func isPointer(ptr uintptr) bool {
// TODO: implement precise GC
return isOnHeap(ptr)
}
// alloc tries to find some free space on the heap, possibly doing a garbage
// collection cycle if needed. If no space is free, it panics.
//
//go:noinline
func Alloc(size uintptr) unsafe.Pointer {
if size == 0 {
return unsafe.Pointer(&zeroSizedAlloc)
}
lock(&gcMutex)
lazyInit()
gcTotalAlloc += uint64(size)
gcMallocs++
neededBlocks := (size + (bytesPerBlock - 1)) / bytesPerBlock
gcTotalBlocks += uint64(neededBlocks)
// Continue looping until a run of free blocks has been found that fits the
// requested size.
index := nextAlloc
numFreeBlocks := uintptr(0)
heapScanCount := uint8(0)
for {
if index == nextAlloc {
if heapScanCount == 0 {
heapScanCount = 1
} else if heapScanCount == 1 {
// The entire heap has been searched for free memory, but none
// could be found. Run a garbage collection cycle to reclaim
// free memory and try again.
heapScanCount = 2
freeBytes := gc()
heapSize := uintptr(metadataStart) - heapStart
if freeBytes < heapSize/3 {
// Ensure there is at least 33% headroom.
// This percentage was arbitrarily chosen, and may need to
// be tuned in the future.
growHeap()
}
} else {
// Even after garbage collection, no free memory could be found.
// Try to increase heap size.
if growHeap() {
// Success, the heap was increased in size. Try again with a
// larger heap.
} else {
// Unfortunately the heap could not be increased. This
// happens on baremetal systems for example (where all
// available RAM has already been dedicated to the heap).
gcPanic(c.Str("out of memory"))
}
}
}
// Wrap around the end of the heap.
if index == endBlock {
index = 0
// Reset numFreeBlocks as allocations cannot wrap.
numFreeBlocks = 0
// In rare cases, the initial heap might be so small that there are
// no blocks at all. In this case, it's better to jump back to the
// start of the loop and try again, until the GC realizes there is
// no memory and grows the heap.
// This can sometimes happen on WebAssembly, where the initial heap
// is created by whatever is left on the last memory page.
continue
}
// Is the block we're looking at free?
if gcStateOf(index) != blockStateFree {
// This block is in use. Try again from this point.
numFreeBlocks = 0
index++
continue
}
numFreeBlocks++
index++
// Are we finished?
if numFreeBlocks == neededBlocks {
// Found a big enough range of free blocks!
nextAlloc = index
thisAlloc := index - neededBlocks
// Set the following blocks as being allocated.
gcSetState(thisAlloc, blockStateHead)
for i := thisAlloc + 1; i != nextAlloc; i++ {
gcSetState(i, blockStateTail)
}
unlock(&gcMutex)
// Return a pointer to this allocation.
return c.Memset(gcPointerOf(thisAlloc), 0, size)
}
}
}
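// An illustrative sketch of the block rounding above; bytesPerBlock is defined
// elsewhere, so the value 16 is an assumption made purely for the example:
//
//	Alloc(20) with bytesPerBlock == 16:
//	neededBlocks = (20 + 16 - 1) / 16 == 2
//	block thisAlloc     -> blockStateHead
//	block thisAlloc + 1 -> blockStateTail
//
// The returned memory is zeroed via c.Memset, so callers always receive
// zero-initialized storage even when a previously used block is handed out.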
// Realloc resizes the allocation at ptr to size bytes, copying into a fresh
// allocation when the existing block run is too small. A nil ptr behaves like
// Alloc.
func Realloc(ptr unsafe.Pointer, size uintptr) unsafe.Pointer {
if ptr == nil {
return Alloc(size)
}
lock(&gcMutex)
lazyInit()
unlock(&gcMutex)
ptrAddress := uintptr(ptr)
endOfTailAddress := gcAddressOf(gcFindNext(blockFromAddr(ptrAddress)))
// this might be a few bytes longer than the original size of
// ptr, because we align to full blocks of size bytesPerBlock
oldSize := endOfTailAddress - ptrAddress
if size <= oldSize {
return ptr
}
newAlloc := Alloc(size)
c.Memcpy(newAlloc, ptr, oldSize)
free(ptr)
return newAlloc
}
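// Because oldSize is measured to the end of the last tail block, growing
// within the slack of that block is a no-op. Assuming bytesPerBlock == 16
// again (an illustrative value only): a Realloc from 20 to 30 bytes on a
// two-block allocation returns the original pointer, while a Realloc to 40
// bytes allocates a fresh three-block object, copies the 32 old bytes and
// leaves the old blocks to be reclaimed by a later collection, since free is
// currently a no-op.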
// free releases an allocation. It is currently a no-op: blocks are only
// reclaimed by the garbage collector.
func free(ptr unsafe.Pointer) {
// TODO: free blocks on request, when the compiler knows they're unused.
}
// GC runs a garbage collection cycle and returns the number of free bytes in
// the heap afterwards.
func GC() uintptr {
lock(&gcMutex)
freeBytes := gc()
unlock(&gcMutex)
return freeBytes
}
// gc performs a garbage collection cycle. It is the internal implementation of
// the exported GC() function; the difference is that it returns the number of
// free bytes in the heap after the cycle is finished.
func gc() (freeBytes uintptr) {
lazyInit()
if gcDebug {
println("running collection cycle...")
}
// Mark phase: mark all reachable objects, recursively.
gcMarkReachable()
finishMark()
// If we're using threads, resume all other threads before starting the
// sweep.
gcResumeWorld()
// Sweep phase: free all non-marked objects and unmark marked objects for
// the next collection cycle.
freeBytes = sweep()
return
}
// markRoots reads all pointers from start to end (exclusive) and if they look
// like a heap pointer and are unmarked, marks them and scans that object as
// well (recursively). The start and end parameters must be valid pointers and
// must be aligned.
func markRoots(start, end uintptr) {
if start >= end {
gcPanic(c.Str("gc: unexpected range to mark"))
}
// Reduce the end bound to avoid reading too far on platforms where pointer alignment is smaller than pointer size.
// If the size of the range is 0, then end will be slightly below start after this.
end -= unsafe.Sizeof(end) - unsafe.Alignof(end)
for addr := start; addr < end; addr += unsafe.Alignof(addr) {
root := *(*uintptr)(unsafe.Pointer(addr))
markRoot(addr, root)
}
}
// startMark starts the marking process on a root and all of its children.
func startMark(root uintptr) {
var stack [markStackSize]uintptr
stack[0] = root
gcSetState(root, blockStateMark)
stackLen := 1
for stackLen > 0 {
// Pop a block off of the stack.
stackLen--
block := stack[stackLen]
start, end := gcAddressOf(block), gcAddressOf(gcFindNext(block))
for addr := start; addr != end; addr += unsafe.Alignof(addr) {
// Load the word.
word := *(*uintptr)(unsafe.Pointer(addr))
if !isPointer(word) {
// Not a heap pointer.
continue
}
// Find the corresponding memory block.
referencedBlock := blockFromAddr(word)
if gcStateOf(referencedBlock) == blockStateFree {
// The to-be-marked object doesn't actually exist.
// This is probably a false positive.
continue
}
// Move to the block's head.
referencedBlock = gcFindHead(referencedBlock)
if gcStateOf(referencedBlock) == blockStateMark {
// The block has already been marked by something else.
continue
}
// Mark block.
gcSetState(referencedBlock, blockStateMark)
if stackLen == len(stack) {
// The stack is full.
// It is necessary to rescan all marked blocks once we are done.
markStackOverflow = true
if gcDebug {
println("gc stack overflowed")
}
continue
}
// Push the pointer onto the stack to be scanned later.
stack[stackLen] = referencedBlock
stackLen++
}
}
}
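// startMark is deliberately iterative with a fixed-size stack so that marking
// never allocates. The overflow path works as follows: if the stack is full
// when a new head block is found, the block is still marked but not pushed,
// and markStackOverflow is set; finishMark below then rescans every marked
// block, so the object's children are still visited eventually. Correctness
// is preserved at the cost of extra passes over the heap.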
// finishMark finishes the marking process by processing all stack overflows.
func finishMark() {
for markStackOverflow {
// Re-mark all blocks.
markStackOverflow = false
for block := uintptr(0); block < endBlock; block++ {
if gcStateOf(block) != blockStateMark {
// Block is not marked, so we do not need to rescan it.
continue
}
// Re-mark the block.
startMark(block)
}
}
}
// markRoot marks a GC root: root is the pointer-sized word read from address addr.
func markRoot(addr, root uintptr) {
if isOnHeap(root) {
block := blockFromAddr(root)
if gcStateOf(block) == blockStateFree {
// The to-be-marked object doesn't actually exist.
// This could either be a dangling pointer (oops!) but most likely
// just a false positive.
return
}
head := gcFindHead(block)
if gcStateOf(head) != blockStateMark {
startMark(head)
}
}
}
// sweep goes through all memory and frees unmarked objects.
// It returns how many bytes are free in the heap after the sweep.
func sweep() (freeBytes uintptr) {
freeCurrentObject := false
var freed uint64
for block := uintptr(0); block < endBlock; block++ {
switch gcStateOf(block) {
case blockStateHead:
// Unmarked head. Free it, including all tail blocks following it.
gcMarkFree(block)
freeCurrentObject = true
gcFrees++
freed++
case blockStateTail:
if freeCurrentObject {
// This is a tail object following an unmarked head.
// Free it now.
gcMarkFree(block)
freed++
}
case blockStateMark:
// This is a marked object. The next tail blocks must not be freed,
// but the mark bit must be removed so the next GC cycle will
// collect this object if it is unreferenced then.
gcUnmark(block)
freeCurrentObject = false
case blockStateFree:
freeBytes += bytesPerBlock
}
}
gcFreedBlocks += freed
freeBytes += uintptr(freed) * bytesPerBlock
return
}
// growHeap tries to grow the heap size. It returns true if it succeeds, false
// otherwise.
func growHeap() bool {
// On baremetal, there is no way the heap can be grown.
return false
}
// gcMarkReachable conservatively marks everything reachable from the current
// stack and from the global data section.
func gcMarkReachable() {
markRoots(uintptr(getsp()), stackTop)
markRoots(globalsStart, globalsEnd)
}
func gcResumeWorld() {
// Nothing to do here (single threaded).
}

View File

@@ -0,0 +1,8 @@
package tinygogc
// TODO(MeteorsLiu): mutex lock for baremetal GC
type mutex struct{}
func lock(m *mutex) {}
func unlock(m *mutex) {}
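// A purely illustrative sketch of what the TODO above might eventually become
// if interrupt handlers or a second core ever touch the heap; nothing below is
// part of the current design, and the names are assumptions:
//
//	type mutex struct{ state uint32 }
//
//	func lock(m *mutex) {
//		for !atomic.CompareAndSwapUint32(&m.state, 0, 1) {
//		}
//	}
//
//	func unlock(m *mutex) { atomic.StoreUint32(&m.state, 0) }
//
// On today's single-threaded baremetal targets the empty stubs above are
// sufficient, so lock and unlock effectively compile away to nothing.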

View File

@@ -0,0 +1,604 @@
//go:build testGC
package tinygogc
import (
"testing"
"unsafe"
c "github.com/goplus/llgo/runtime/internal/clite"
)
const (
// Mock a typical embedded system with 128KB RAM
mockHeapSize = 128 * 1024 // 128KB
mockGlobalsSize = 4 * 1024 // 4KB for globals
mockStackSize = 8 * 1024 // 8KB for stack
mockReservedSize = 2048 // 2KB reserved as in real implementation
)
type testObject struct {
data [4]uintptr
}
// mockGCEnv provides a controlled root environment for GC testing
type mockGCEnv struct {
memory []byte
heapStart uintptr
heapEnd uintptr
globalsStart uintptr
globalsEnd uintptr
stackStart uintptr
stackEnd uintptr
// Controlled root sets for testing
rootObjects []unsafe.Pointer
// Original GC state to restore
originalHeapStart uintptr
originalHeapEnd uintptr
originalGlobalsStart uintptr
originalGlobalsEnd uintptr
originalStackTop uintptr
originalEndBlock uintptr
originalMetadataStart unsafe.Pointer
originalNextAlloc uintptr
originalIsGCInit bool
// Mock mode flag
mockMode bool
}
// createMockGCEnv creates a completely isolated GC environment
func createMockGCEnv() *mockGCEnv {
totalMemory := mockHeapSize + mockGlobalsSize + mockStackSize
memory := make([]byte, totalMemory)
baseAddr := uintptr(unsafe.Pointer(&memory[0]))
env := &mockGCEnv{
memory: memory,
globalsStart: baseAddr,
globalsEnd: baseAddr + mockGlobalsSize,
heapStart: baseAddr + mockGlobalsSize + mockReservedSize,
heapEnd: baseAddr + mockGlobalsSize + mockHeapSize,
stackStart: baseAddr + mockGlobalsSize + mockHeapSize,
stackEnd: baseAddr + uintptr(totalMemory),
rootObjects: make([]unsafe.Pointer, 0),
mockMode: false,
}
return env
}
// setupMockGC initializes the GC with mock memory layout using initGC's logic
func (env *mockGCEnv) setupMockGC() {
// Save original GC state
env.originalHeapStart = heapStart
env.originalHeapEnd = heapEnd
env.originalGlobalsStart = globalsStart
env.originalGlobalsEnd = globalsEnd
env.originalStackTop = stackTop
env.originalEndBlock = endBlock
env.originalMetadataStart = metadataStart
env.originalNextAlloc = nextAlloc
env.originalIsGCInit = isGCInit
// Set currentStack for getsp()
currentStack = env.stackStart
// Apply initGC's logic with our mock memory layout
// This is the same logic as initGC() but with our mock addresses
heapStart = env.heapStart + 2048 // reserve 2KB, like initGC does
heapEnd = env.heapEnd
globalsStart = env.globalsStart
globalsEnd = env.globalsEnd
stackTop = env.stackEnd
totalSize := heapEnd - heapStart
metadataSize := (totalSize + blocksPerStateByte*bytesPerBlock) / (1 + blocksPerStateByte*bytesPerBlock)
metadataStart = unsafe.Pointer(heapEnd - metadataSize)
endBlock = (uintptr(metadataStart) - heapStart) / bytesPerBlock
// Clear metadata using memset like initGC does
c.Memset(metadataStart, 0, metadataSize)
// Reset allocator state and all GC statistics for clean test environment
nextAlloc = 0
isGCInit = true
// Reset all GC statistics to start from clean state
gcTotalAlloc = 0
gcTotalBlocks = 0
gcMallocs = 0
gcFrees = 0
gcFreedBlocks = 0
markStackOverflow = false
}
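// The metadata sizing above mirrors initGC: each block needs stateBits bits of
// metadata, so roughly 1/(1 + blocksPerStateByte*bytesPerBlock) of the region
// is carved off its top for state bytes. As a rough illustration (assuming
// bytesPerBlock == 16 and blocksPerStateByte == 4, i.e. one metadata byte per
// 64 heap bytes), the ~124 KiB mock region here yields on the order of 2 KiB
// of metadata, leaving endBlock at a few thousand blocks.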
// restoreOriginalGC restores the original GC state
func (env *mockGCEnv) restoreOriginalGC() {
heapStart = env.originalHeapStart
heapEnd = env.originalHeapEnd
globalsStart = env.originalGlobalsStart
globalsEnd = env.originalGlobalsEnd
stackTop = env.originalStackTop
endBlock = env.originalEndBlock
metadataStart = env.originalMetadataStart
nextAlloc = env.originalNextAlloc
isGCInit = false
}
// enableMockMode enables mock root scanning mode
func (env *mockGCEnv) enableMockMode() {
env.mockMode = true
}
// disableMockMode disables mock root scanning mode
func (env *mockGCEnv) disableMockMode() {
env.mockMode = false
}
// addRoot adds an object to the controlled root set
func (env *mockGCEnv) addRoot(ptr unsafe.Pointer) {
env.rootObjects = append(env.rootObjects, ptr)
}
// clearRoots removes all objects from the controlled root set
func (env *mockGCEnv) clearRoots() {
env.rootObjects = env.rootObjects[:0]
}
// mockMarkReachable replaces gcMarkReachable when in mock mode
func (env *mockGCEnv) mockMarkReachable() {
if !env.mockMode {
// Use original logic
markRoots(uintptr(getsp()), stackTop)
markRoots(globalsStart, globalsEnd)
return
}
// Mock mode: only scan our controlled roots
for _, root := range env.rootObjects {
addr := uintptr(root)
markRoot(addr, addr)
}
}
// runMockGC runs standard GC but with controlled root scanning
func (env *mockGCEnv) runMockGC() uintptr {
lock(&gcMutex)
defer unlock(&gcMutex)
lazyInit()
if gcDebug {
println("running mock collection cycle...")
}
// Mark phase: use our mock root scanning
env.mockMarkReachable()
finishMark()
// Resume world (no-op in single threaded)
gcResumeWorld()
// Sweep phase: use standard sweep logic
return sweep()
}
// createTestObjects creates a network of objects for testing reachability
func createTestObjects(env *mockGCEnv) []*testObject {
// Allocate several test objects
objects := make([]*testObject, 0, 10)
// Dependency graph:
// root1 -> child1 -> grandchild1 -> child2
// root1 -> child2 -> grandchild1
// Create root objects (reachable from stack/globals)
root1 := (*testObject)(Alloc(unsafe.Sizeof(testObject{})))
root2 := (*testObject)(Alloc(unsafe.Sizeof(testObject{})))
objects = append(objects, root1, root2)
// Create objects reachable from root1
child1 := (*testObject)(Alloc(unsafe.Sizeof(testObject{})))
child2 := (*testObject)(Alloc(unsafe.Sizeof(testObject{})))
root1.data[0] = uintptr(unsafe.Pointer(child1))
root1.data[1] = uintptr(unsafe.Pointer(child2))
objects = append(objects, child1, child2)
// Create objects reachable from child1
grandchild1 := (*testObject)(Alloc(unsafe.Sizeof(testObject{})))
child1.data[0] = uintptr(unsafe.Pointer(grandchild1))
objects = append(objects, grandchild1)
// Create circular reference between child2 and grandchild1
child2.data[0] = uintptr(unsafe.Pointer(grandchild1))
grandchild1.data[0] = uintptr(unsafe.Pointer(child2))
// Create unreachable objects (garbage)
garbage1 := (*testObject)(Alloc(unsafe.Sizeof(testObject{})))
garbage2 := (*testObject)(Alloc(unsafe.Sizeof(testObject{})))
// Create circular reference in garbage
garbage1.data[0] = uintptr(unsafe.Pointer(garbage2))
garbage2.data[0] = uintptr(unsafe.Pointer(garbage1))
objects = append(objects, garbage1, garbage2)
return objects
}
func TestMockGCBasicAllocation(t *testing.T) {
env := createMockGCEnv()
env.setupMockGC()
defer env.restoreOriginalGC()
// Test basic allocation
ptr1 := Alloc(32)
if ptr1 == nil {
t.Fatal("Failed to allocate 32 bytes")
}
ptr2 := Alloc(64)
if ptr2 == nil {
t.Fatal("Failed to allocate 64 bytes")
}
// Verify pointers are within heap bounds
addr1 := uintptr(ptr1)
addr2 := uintptr(ptr2)
if addr1 < heapStart || addr1 >= uintptr(metadataStart) {
t.Errorf("ptr1 %x not within heap bounds [%x, %x)", addr1, heapStart, uintptr(metadataStart))
}
if addr2 < heapStart || addr2 >= uintptr(metadataStart) {
t.Errorf("ptr2 %x not within heap bounds [%x, %x)", addr2, heapStart, uintptr(metadataStart))
}
t.Logf("Allocated ptr1 at %x, ptr2 at %x", addr1, addr2)
t.Logf("Heap bounds: [%x, %x)", heapStart, uintptr(metadataStart))
}
func TestMockGCReachabilityAndSweep(t *testing.T) {
env := createMockGCEnv()
env.setupMockGC()
defer env.restoreOriginalGC()
// Track initial stats
initialMallocs := gcMallocs
initialFrees := gcFrees
// Create test object network
objects := createTestObjects(env)
// Add first 2 objects as roots using mock control
env.enableMockMode()
env.addRoot(unsafe.Pointer(objects[0])) // root1
env.addRoot(unsafe.Pointer(objects[1])) // root2
t.Logf("Created %d objects, 2 are roots", len(objects))
t.Logf("Mallocs: %d", gcMallocs-initialMallocs)
// Verify all objects are initially allocated
for i, obj := range objects {
addr := uintptr(unsafe.Pointer(obj))
block := blockFromAddr(addr)
state := gcStateOf(block)
if state != blockStateHead {
t.Errorf("Object %d at %x has state %d, expected %d (HEAD)", i, addr, state, blockStateHead)
}
}
// Perform GC with controlled root scanning
freedBytes := env.runMockGC()
t.Logf("Freed %d bytes during GC", freedBytes)
t.Logf("Frees: %d (delta: %d)", gcFrees, gcFrees-initialFrees)
// Verify reachable objects are still allocated
reachableObjects := []unsafe.Pointer{
unsafe.Pointer(objects[0]), // root1
unsafe.Pointer(objects[1]), // root2
unsafe.Pointer(objects[2]), // child1 (reachable from root1)
unsafe.Pointer(objects[3]), // child2 (reachable from root1)
unsafe.Pointer(objects[4]), // grandchild1 (reachable from child1, child2)
}
for i, obj := range reachableObjects {
addr := uintptr(obj)
block := blockFromAddr(addr)
state := gcStateOf(block)
if state != blockStateHead {
t.Errorf("Reachable object %d at %x has state %d, expected %d (HEAD)", i, addr, state, blockStateHead)
}
}
// Verify unreachable objects are freed
unreachableObjects := []unsafe.Pointer{
unsafe.Pointer(objects[5]), // garbage1
unsafe.Pointer(objects[6]), // garbage2
}
for i, obj := range unreachableObjects {
addr := uintptr(obj)
block := blockFromAddr(addr)
state := gcStateOf(block)
if state != blockStateFree {
t.Errorf("Unreachable object %d at %x has state %d, expected %d (FREE)", i, addr, state, blockStateFree)
}
}
// Verify some memory was actually freed
if freedBytes == 0 {
t.Error("Expected some memory to be freed, but freed 0 bytes")
}
if gcFrees == initialFrees {
t.Error("Expected some objects to be freed, but free count didn't change")
}
// Clear refs to make grandchild1 unreachable
objects[2].data[0] = 0 // child1 -> grandchild1
objects[3].data[0] = 0 // child2 -> grandchild1
// Run GC again with same roots
freedBytes = env.runMockGC()
// child2 should still be reachable (through root1)
blockAddr := blockFromAddr(uintptr(unsafe.Pointer(objects[3])))
state := gcStateOf(blockAddr)
if state != blockStateHead {
t.Errorf("Object child2 at %x has state %d, expected %d (HEAD)", blockAddr, state, blockStateHead)
}
// grandchild1 should now be unreachable and freed
blockAddr = blockFromAddr(uintptr(unsafe.Pointer(objects[4])))
state = gcStateOf(blockAddr)
if state != blockStateFree {
t.Errorf("Object grandchild1 at %x has state %d, expected %d (FREE)", blockAddr, state, blockStateFree)
}
}
func TestMockGCMemoryPressure(t *testing.T) {
env := createMockGCEnv()
env.setupMockGC()
defer env.restoreOriginalGC()
// Calculate available heap space
heapSize := uintptr(metadataStart) - heapStart
blockSize := bytesPerBlock
maxBlocks := heapSize / blockSize
t.Logf("Heap size: %d bytes, Block size: %d bytes, Max blocks: %d",
heapSize, blockSize, maxBlocks)
// Allocate until we trigger GC
var allocations []unsafe.Pointer
allocSize := uintptr(32) // Small allocations
// Allocate about 80% of heap to trigger GC pressure
targetAllocations := int(maxBlocks * 4 / 5) // 80% capacity
for i := 0; i < targetAllocations; i++ {
ptr := Alloc(allocSize)
if ptr == nil {
t.Fatalf("Failed to allocate at iteration %d", i)
}
allocations = append(allocations, ptr)
}
initialMallocs := gcMallocs
t.Logf("Allocated %d objects (%d mallocs total)", len(allocations), initialMallocs)
// Enable mock mode and keep only half the allocations as roots
env.enableMockMode()
keepCount := len(allocations) / 2
for i := 0; i < keepCount; i++ {
env.addRoot(allocations[i])
}
t.Logf("Keeping %d objects as roots, %d should be freed", keepCount, len(allocations)-keepCount)
// Force GC with controlled roots
freeBytes := env.runMockGC()
t.Logf("GC freed %d bytes", freeBytes)
t.Logf("Objects freed: %d", gcFrees)
// Try to allocate more after GC
for i := 0; i < 10; i++ {
ptr := Alloc(allocSize)
if ptr == nil {
t.Fatalf("Failed to allocate after GC at iteration %d", i)
}
}
t.Log("Successfully allocated more objects after GC")
}
func TestMockGCStats(t *testing.T) {
env := createMockGCEnv()
env.setupMockGC()
defer env.restoreOriginalGC()
// Get initial stats
initialStats := ReadGCStats()
t.Logf("Initial stats - Mallocs: %d, Frees: %d, TotalAlloc: %d, Alloc: %d",
initialStats.Mallocs, initialStats.Frees, initialStats.TotalAlloc, initialStats.Alloc)
// Verify basic system stats
expectedSys := uint64(env.heapEnd - env.heapStart - 2048)
if initialStats.Sys != expectedSys {
t.Errorf("Expected Sys %d, got %d", expectedSys, initialStats.Sys)
}
expectedGCSys := uint64(env.heapEnd - uintptr(metadataStart))
if initialStats.GCSys != expectedGCSys {
t.Errorf("Expected GCSys %d, got %d", expectedGCSys, initialStats.GCSys)
}
// Allocate some objects
var allocations []unsafe.Pointer
allocSize := uintptr(64)
numAllocs := 10
for i := 0; i < numAllocs; i++ {
ptr := Alloc(allocSize)
if ptr == nil {
t.Fatalf("Failed to allocate at iteration %d", i)
}
allocations = append(allocations, ptr)
}
// Check stats after allocation
afterAllocStats := ReadGCStats()
t.Logf("After allocation - Mallocs: %d, Frees: %d, TotalAlloc: %d, Alloc: %d",
afterAllocStats.Mallocs, afterAllocStats.Frees, afterAllocStats.TotalAlloc, afterAllocStats.Alloc)
// Verify allocation stats increased
if afterAllocStats.Mallocs <= initialStats.Mallocs {
t.Errorf("Expected Mallocs to increase from %d, got %d", initialStats.Mallocs, afterAllocStats.Mallocs)
}
if afterAllocStats.TotalAlloc <= initialStats.TotalAlloc {
t.Errorf("Expected TotalAlloc to increase from %d, got %d", initialStats.TotalAlloc, afterAllocStats.TotalAlloc)
}
if afterAllocStats.Alloc <= initialStats.Alloc {
t.Errorf("Expected Alloc to increase from %d, got %d", initialStats.Alloc, afterAllocStats.Alloc)
}
// Verify Alloc and HeapAlloc are the same
if afterAllocStats.Alloc != afterAllocStats.HeapAlloc {
t.Errorf("Expected Alloc (%d) to equal HeapAlloc (%d)", afterAllocStats.Alloc, afterAllocStats.HeapAlloc)
}
// Perform GC with controlled roots - keep only half the allocations
env.enableMockMode()
keepCount := len(allocations) / 2
for i := 0; i < keepCount; i++ {
env.addRoot(allocations[i])
}
freedBytes := env.runMockGC()
t.Logf("GC freed %d bytes", freedBytes)
// Check stats after GC
afterGCStats := ReadGCStats()
t.Logf("After GC - Mallocs: %d, Frees: %d, TotalAlloc: %d, Alloc: %d",
afterGCStats.Mallocs, afterGCStats.Frees, afterGCStats.TotalAlloc, afterGCStats.Alloc)
// Verify GC stats
if afterGCStats.Frees <= afterAllocStats.Frees {
t.Errorf("Expected Frees to increase from %d, got %d", afterAllocStats.Frees, afterGCStats.Frees)
}
// TotalAlloc should not decrease (cumulative)
if afterGCStats.TotalAlloc != afterAllocStats.TotalAlloc {
t.Errorf("Expected TotalAlloc to remain %d after GC, got %d", afterAllocStats.TotalAlloc, afterGCStats.TotalAlloc)
}
// Alloc should decrease (freed objects)
if afterGCStats.Alloc >= afterAllocStats.Alloc {
t.Errorf("Expected Alloc to decrease from %d after GC, got %d", afterAllocStats.Alloc, afterGCStats.Alloc)
}
// Verify heap statistics consistency
if afterGCStats.HeapSys != afterGCStats.HeapInuse+afterGCStats.HeapIdle {
t.Errorf("Expected HeapSys (%d) to equal HeapInuse (%d) + HeapIdle (%d)",
afterGCStats.HeapSys, afterGCStats.HeapInuse, afterGCStats.HeapIdle)
}
// Verify live objects calculation
expectedLiveObjects := afterGCStats.Mallocs - afterGCStats.Frees
t.Logf("Live objects: %d (Mallocs: %d - Frees: %d)", expectedLiveObjects, afterGCStats.Mallocs, afterGCStats.Frees)
// The number of live objects should be reasonable (we kept half the allocations plus some overhead)
if expectedLiveObjects < uint64(keepCount) {
t.Errorf("Expected at least %d live objects, got %d", keepCount, expectedLiveObjects)
}
// Test stack statistics
if afterGCStats.StackInuse > afterGCStats.StackSys {
t.Errorf("StackInuse (%d) should not exceed StackSys (%d)", afterGCStats.StackInuse, afterGCStats.StackSys)
}
}
func TestMockGCCircularReferences(t *testing.T) {
env := createMockGCEnv()
env.setupMockGC()
defer env.restoreOriginalGC()
type Node struct {
data [3]uintptr
next uintptr
}
// Create a circular linked list
nodes := make([]*Node, 5)
for i := range nodes {
nodes[i] = (*Node)(Alloc(unsafe.Sizeof(Node{})))
nodes[i].data[0] = uintptr(i) // Store index as data
}
// Link them in a circle
for i := range nodes {
nextIdx := (i + 1) % len(nodes)
nodes[i].next = uintptr(unsafe.Pointer(nodes[nextIdx]))
}
t.Logf("Created circular list of %d nodes", len(nodes))
// Initially all should be allocated
for i, node := range nodes {
addr := uintptr(unsafe.Pointer(node))
block := blockFromAddr(addr)
state := gcStateOf(block)
if state != blockStateHead {
t.Errorf("Node %d at %x has state %d, expected %d", i, addr, state, blockStateHead)
}
}
// Test 1: With root references - objects should NOT be freed
env.enableMockMode()
// Add the first node as root (keeps entire circle reachable)
env.addRoot(unsafe.Pointer(nodes[0]))
freeBytes := env.runMockGC()
t.Logf("GC with root reference freed %d bytes", freeBytes)
// All nodes should still be allocated since they're reachable through the root
for i, node := range nodes {
addr := uintptr(unsafe.Pointer(node))
block := blockFromAddr(addr)
state := gcStateOf(block)
if state != blockStateHead {
t.Errorf("Node %d at %x should still be allocated, but has state %d", i, addr, state)
}
}
// Test 2: Without root references - all circular objects should be freed
env.clearRoots() // Remove all root references
freeBytes = env.runMockGC()
t.Logf("GC without roots freed %d bytes", freeBytes)
// All nodes should now be freed since they're not reachable from any roots
expectedFreed := uintptr(len(nodes)) * ((unsafe.Sizeof(Node{}) + bytesPerBlock - 1) / bytesPerBlock) * bytesPerBlock
if freeBytes < expectedFreed {
t.Errorf("Expected at least %d bytes freed, got %d", expectedFreed, freeBytes)
}
// Verify all nodes are actually freed
for i, node := range nodes {
addr := uintptr(unsafe.Pointer(node))
block := blockFromAddr(addr)
state := gcStateOf(block)
if state != blockStateFree {
t.Errorf("Node %d at %x should be freed, but has state %d", i, addr, state)
}
}
// Verify we can allocate new objects in the freed space
newPtr := Alloc(unsafe.Sizeof(Node{}))
if newPtr == nil {
t.Error("Failed to allocate after freeing circular references")
}
}
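// These tests compile only when the testGC build tag is set (see the
// //go:build line at the top of this file). With a Go-compatible toolchain the
// invocation would look roughly like
//
//	go test -tags testGC ./runtime/internal/runtime/tinygogc/
//
// where the package path is inferred from the imports in this changeset and
// may need adjusting to wherever tinygogc actually lives in the tree.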

View File

@@ -1,4 +1,4 @@
//go:build !nogc
//go:build !nogc && !baremetal
/*
* Copyright (c) 2025 The GoPlus Authors (goplus.org). All rights reserved.

View File

@@ -1,4 +1,4 @@
//go:build nogc
//go:build nogc || baremetal
/*
* Copyright (c) 2025 The GoPlus Authors (goplus.org). All rights reserved.

View File

@@ -1,5 +1,4 @@
//go:build !nogc
// +build !nogc
//go:build !nogc && !baremetal
/*
* Copyright (c) 2024 The GoPlus Authors (goplus.org). All rights reserved.

View File

@@ -0,0 +1,35 @@
//go:build !nogc && baremetal
/*
* Copyright (c) 2024 The GoPlus Authors (goplus.org). All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package runtime
import (
"unsafe"
"github.com/goplus/llgo/runtime/internal/runtime/tinygogc"
)
// AllocU allocates uninitialized memory.
func AllocU(size uintptr) unsafe.Pointer {
return tinygogc.Alloc(size)
}
// AllocZ allocates zero-initialized memory.
func AllocZ(size uintptr) unsafe.Pointer {
return tinygogc.Alloc(size)
}
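// Note that both helpers currently resolve to the same call: tinygogc.Alloc
// zeroes the blocks it returns, so "uninitialized" allocations on baremetal are
// in fact zero-initialized as well. If a faster non-zeroing path is added
// later, AllocU would be the natural place to use it.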

View File

@@ -1,6 +1,4 @@
__stack = ORIGIN(dram_seg) + LENGTH(dram_seg);
__MIN_STACK_SIZE = 0x1000;
_stack_top = __stack;
_heapEnd = ORIGIN(dram_seg) + LENGTH(dram_seg);
/* Default entry point */
ENTRY(_start)
@@ -94,6 +92,12 @@ SECTIONS
_iram_end = .;
} > iram_seg
.stack (NOLOAD) :
{
. += 16K;
__stack = .;
} > dram_seg
/**
* This section is required to skip .iram0.text area because iram0_0_seg and
* dram0_0_seg reflect the same address space on different buses.
@@ -148,7 +152,7 @@ SECTIONS
} > dram_seg
/* Check if data + heap + stack exceeds RAM limit */
ASSERT(_end <= __stack - __MIN_STACK_SIZE, "region DRAM overflowed by .data and .bss sections")
ASSERT(_end <= _heapEnd, "region DRAM overflowed by .data and .bss sections")
/* Stabs debugging sections. */
.stab 0 : { *(.stab) }
@@ -193,3 +197,8 @@ SECTIONS
.gnu.attributes 0 : { KEEP (*(.gnu.attributes)) }
/DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) }
}
_globals_start = _data_start;
_globals_end = _end;
_heapStart = _end;
_stack_top = __stack;

View File

@@ -1,5 +1,4 @@
__stack = ORIGIN(dram_seg) + LENGTH(dram_seg);
__MIN_STACK_SIZE = 0x2000;
_heapEnd = ORIGIN(dram_seg) + LENGTH(dram_seg);
ENTRY(_start)
SECTIONS
@@ -26,6 +25,14 @@ SECTIONS
the same address within the page on the next page up. */
. = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE));
.stack (NOLOAD) :
{
_stack_end = .;
. = ALIGN(16);
. += 16K;
__stack = .;
}
.rodata :
{
@@ -116,7 +123,7 @@ SECTIONS
. = DATA_SEGMENT_END (.);
/* Check if data + heap + stack exceeds RAM limit */
ASSERT(. <= __stack - __MIN_STACK_SIZE, "region DRAM overflowed by .data and .bss sections")
ASSERT(. <= _heapEnd, "region DRAM overflowed by .data and .bss sections")
/* Stabs debugging sections. */
.stab 0 : { *(.stab) }
@@ -165,4 +172,7 @@ SECTIONS
_sbss = __bss_start;
_ebss = _end;
_globals_start = _data_start;
_globals_end = _end;
_heapStart = _end;
_stack_top = __stack;

View File

@@ -19,8 +19,8 @@ MEMORY
/* 64k at the end of DRAM, after ROM bootloader stack
* or entire DRAM (for QEMU only)
*/
dram_seg (RW) : org = 0x3FFF0000 ,
len = 0x10000
dram_seg (RW) : org = 0x3ffae000 ,
len = 0x52000
}
INCLUDE "targets/esp32.app.elf.ld";