Initial commit: Go 1.23 release state

commit 17cd57a668
Author: Vorapol Rinsatitnon
Date:   2024-09-21 23:49:08 +10:00

13231 changed files with 3114330 additions and 0 deletions

332 src/runtime/HACKING.md Normal file

@@ -0,0 +1,332 @@
This is a living document and at times it will be out of date. It is
intended to articulate how programming in the Go runtime differs from
writing normal Go. It focuses on pervasive concepts rather than
details of particular interfaces.

Scheduler structures
====================
The scheduler manages three types of resources that pervade the
runtime: Gs, Ms, and Ps. It's important to understand these even if
you're not working on the scheduler.

Gs, Ms, Ps
----------
A "G" is simply a goroutine. It's represented by type `g`. When a
goroutine exits, its `g` object is returned to a pool of free `g`s and
can later be reused for some other goroutine.
An "M" is an OS thread that can be executing user Go code, runtime
code, a system call, or be idle. It's represented by type `m`. There
can be any number of Ms at a time since any number of threads may be
blocked in system calls.
Finally, a "P" represents the resources required to execute user Go
code, such as scheduler and memory allocator state. It's represented
by type `p`. There are exactly `GOMAXPROCS` Ps. A P can be thought of
like a CPU in the OS scheduler and the contents of the `p` type like
per-CPU state. This is a good place to put state that needs to be
sharded for efficiency, but doesn't need to be per-thread or
per-goroutine.
The scheduler's job is to match up a G (the code to execute), an M
(where to execute it), and a P (the rights and resources to execute
it). When an M stops executing user Go code, for example by entering a
system call, it returns its P to the idle P pool. In order to resume
executing user Go code, for example on return from a system call, it
must acquire a P from the idle pool.
All `g`, `m`, and `p` objects are heap allocated, but are never freed,
so their memory remains type stable. As a result, the runtime can
avoid write barriers in the depths of the scheduler.

`getg()` and `getg().m.curg`
----------------------------
To get the current user `g`, use `getg().m.curg`.
`getg()` alone returns the current `g`, but when executing on the
system or signal stacks, this will return the current M's "g0" or
"gsignal", respectively. This is usually not what you want.
To determine if you're running on the user stack or the system stack,
use `getg() == getg().m.curg`.
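
For instance, a runtime-internal check might look like this (a minimal
sketch; `getg` is a compiler intrinsic available only inside the
runtime, and the helper name is illustrative):

```go
// onSpecialStack reports whether we're executing on the current M's
// system (g0) or signal stack rather than the user stack.
func onSpecialStack() bool {
	gp := getg()
	return gp != gp.m.curg
}
```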

Stacks
======
Every non-dead G has a *user stack* associated with it, which is what
user Go code executes on. User stacks start small (e.g., 2K) and grow
or shrink dynamically.
Every M has a *system stack* associated with it (also known as the M's
"g0" stack because it's implemented as a stub G) and, on Unix
platforms, a *signal stack* (also known as the M's "gsignal" stack).
System and signal stacks cannot grow, but are large enough to execute
runtime and cgo code (8K in a pure Go binary; system-allocated in a
cgo binary).
Runtime code often temporarily switches to the system stack using
`systemstack`, `mcall`, or `asmcgocall` to perform tasks that must not
be preempted, that must not grow the user stack, or that switch user
goroutines. Code running on the system stack is implicitly
non-preemptible and the garbage collector does not scan system stacks.
While running on the system stack, the current user stack is not used
for execution.
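
A typical use looks like the following (a hedged sketch; `systemstack`
and `sched.lock` are real runtime internals, but the body is
illustrative):

```go
// Switch to the g0 stack for work that must not be preempted or
// grow the user stack. The closure runs on the system stack, and
// execution returns to the user stack afterwards.
systemstack(func() {
	lock(&sched.lock)
	// ... inspect or update scheduler state ...
	unlock(&sched.lock)
})
```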

nosplit functions
-----------------
Most functions start with a prologue that inspects the stack pointer
and the current G's stack bound and calls `morestack` if the stack
needs to grow.
Functions can be marked `//go:nosplit` (or `NOSPLIT` in assembly) to
indicate that they should not get this prologue. This has several
uses:
- Functions that must run on the user stack, but must not call into
stack growth, for example because this would cause a deadlock, or
because they have untyped words on the stack.
- Functions that must not be preempted on entry.
- Functions that may run without a valid G. For example, functions
that run in early runtime start-up, or that may be entered from C
code such as cgo callbacks or the signal handler.
Splittable functions ensure there's some amount of space on the stack
for nosplit functions to run in and the linker checks that any static
chain of nosplit function calls cannot exceed this bound.
Any function with a `//go:nosplit` annotation should explain why it is
nosplit in its documentation comment.
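
A sketch of the convention (the function and its comment are
illustrative, not from the tree):

```go
// fetchTLS is called from the signal handler before a G is
// guaranteed to exist, so it must not have a stack-split prologue.
//
//go:nosplit
func fetchTLS(slot uintptr) uintptr {
	return *(*uintptr)(unsafe.Pointer(slot))
}
```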

Error handling and reporting
============================
Errors that can reasonably be recovered from in user code should use
`panic` like usual. However, there are some situations where `panic`
will cause an immediate fatal error, such as when called on the system
stack or when called during `mallocgc`.
Most errors in the runtime are not recoverable. For these, use
`throw`, which dumps the traceback and immediately terminates the
process. In general, `throw` should be passed a string constant to
avoid allocating in perilous situations. By convention, additional
details are printed before `throw` using `print` or `println` and the
messages are prefixed with "runtime:".
For unrecoverable errors where user code is expected to be at fault for the
failure (such as racing map writes), use `fatal`.
For runtime error debugging, it may be useful to run with `GOTRACEBACK=system`
or `GOTRACEBACK=crash`. The output of `panic` and `fatal` is as described by
`GOTRACEBACK`. The output of `throw` always includes runtime frames, metadata
and all goroutines regardless of `GOTRACEBACK` (i.e., equivalent to
`GOTRACEBACK=system`). Whether `throw` crashes or not is still controlled by
`GOTRACEBACK`.
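
Putting the conventions together, a typical unrecoverable-error report
looks like this (a minimal sketch; the condition is illustrative):

```go
gp := getg()
if gp.m.locks != 0 {
	// Details first, prefixed with "runtime:"; then throw with a
	// constant string to avoid allocating in a perilous situation.
	print("runtime: unexpected locks held, count=", gp.m.locks, "\n")
	throw("locks held during park")
}
```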

Synchronization
===============
The runtime has multiple synchronization mechanisms. They differ in
semantics and, in particular, in whether they interact with the
goroutine scheduler or the OS scheduler.
The simplest is `mutex`, which is manipulated using `lock` and
`unlock`. This should be used to protect shared structures for short
periods. Blocking on a `mutex` directly blocks the M, without
interacting with the Go scheduler. This means it is safe to use from
the lowest levels of the runtime, but also prevents any associated G
and P from being rescheduled. `rwmutex` is similar.
For one-shot notifications, use `note`, which provides `notesleep` and
`notewakeup`. Unlike traditional UNIX `sleep`/`wakeup`, `note`s are
race-free, so `notesleep` returns immediately if the `notewakeup` has
already happened. A `note` can be reset after use with `noteclear`,
which must not race with a sleep or wakeup. Like `mutex`, blocking on
a `note` blocks the M. However, there are different ways to sleep on a
`note`: `notesleep` also prevents rescheduling of any associated G and
P, while `notetsleepg` acts like a blocking system call that allows
the P to be reused to run another G. This is still less efficient than
blocking the G directly since it consumes an M.
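
A one-shot handoff with `note` might look like this (a hedged sketch;
the variable name is illustrative):

```go
var shutdownNote note

// Waiter, running on g0, since notesleep blocks the whole M:
noteclear(&shutdownNote) // reset; must not race with sleep/wakeup
notesleep(&shutdownNote) // returns once notewakeup has happened

// Signaler, from any M:
notewakeup(&shutdownNote)
```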
To interact directly with the goroutine scheduler, use `gopark` and
`goready`. `gopark` parks the current goroutine—putting it in the
"waiting" state and removing it from the scheduler's run queue—and
schedules another goroutine on the current M/P. `goready` puts a
parked goroutine back in the "runnable" state and adds it to the run
queue.
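
For example (a hedged sketch using the signatures in this tree; the
wait and trace reasons are two of the predefined constants):

```go
// Park the current goroutine until some condition holds. A nil
// unlock function means nothing extra runs after the G is parked.
gopark(nil, nil, waitReasonSleep, traceBlockSleep, 1)

// Later, from another goroutine that observes the condition,
// make the parked G (gp) runnable again:
goready(gp, 1)
```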
In summary,
<table>
<tr><th></th><th colspan="3">Blocks</th></tr>
<tr><th>Interface</th><th>G</th><th>M</th><th>P</th></tr>
<tr><td>(rw)mutex</td><td>Y</td><td>Y</td><td>Y</td></tr>
<tr><td>note</td><td>Y</td><td>Y</td><td>Y/N</td></tr>
<tr><td>park</td><td>Y</td><td>N</td><td>N</td></tr>
</table>

Atomics
=======
The runtime uses its own atomics package at `internal/runtime/atomic`.
This corresponds to `sync/atomic`, but functions have different names
for historical reasons and there are a few additional functions needed
by the runtime.
In general, we think hard about the uses of atomics in the runtime and
try to avoid unnecessary atomic operations. If access to a variable is
sometimes protected by another synchronization mechanism, the
already-protected accesses generally don't need to be atomic. There
are several reasons for this:
1. Using non-atomic or atomic access where appropriate makes the code
more self-documenting. Atomic access to a variable implies there's
somewhere else that may concurrently access the variable.
2. Non-atomic access allows for automatic race detection. The runtime
doesn't currently have a race detector, but it may in the future.
Atomic access defeats the race detector, while non-atomic access
allows the race detector to check your assumptions.
3. Non-atomic access may improve performance.
Of course, any non-atomic access to a shared variable should be
documented to explain how that access is protected.
Some common patterns that mix atomic and non-atomic access are:
* Read-mostly variables where updates are protected by a lock. Within
the locked region, reads do not need to be atomic, but the write
does. Outside the locked region, reads need to be atomic. (This
pattern is sketched after the list.)
* Reads that only happen during STW, where no writes can happen during
STW, do not need to be atomic.
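
Here is the read-mostly pattern made concrete (a hedged sketch using
`internal/runtime/atomic`; the variable names are illustrative, not
from the runtime):

```go
var (
	statLock mutex
	count    uint64 // written under statLock, read by anyone
)

func inc() {
	lock(&statLock)
	n := count                  // non-atomic read: the lock excludes writers
	atomic.Store64(&count, n+1) // atomic write: lock-free readers exist
	unlock(&statLock)
}

func load() uint64 {
	return atomic.Load64(&count) // outside the lock, reads must be atomic
}
```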
That said, the advice from the Go memory model stands: "Don't be
[too] clever." The performance of the runtime matters, but its
robustness matters more.

Unmanaged memory
================
In general, the runtime tries to use regular heap allocation. However,
in some cases the runtime must allocate objects outside of the garbage
collected heap, in *unmanaged memory*. This is necessary if the
objects are part of the memory manager itself or if they must be
allocated in situations where the caller may not have a P.
There are three mechanisms for allocating unmanaged memory:
* sysAlloc obtains memory directly from the OS. This comes in whole
multiples of the system page size, but it can be freed with sysFree.
* persistentalloc combines multiple smaller allocations into a single
sysAlloc to avoid fragmentation. However, there is no way to free
persistentalloced objects (hence the name).
* fixalloc is a SLAB-style allocator that allocates objects of a fixed
size. fixalloced objects can be freed, but this memory can only be
reused by the same fixalloc pool, so it can only be reused for
objects of the same type.
In general, types that are allocated using any of these should be
marked as not in heap by embedding `runtime/internal/sys.NotInHeap`.
Objects that are allocated in unmanaged memory **must not** contain
heap pointers unless the following rules are also obeyed:
1. Any pointers from unmanaged memory to the heap must be garbage
collection roots. More specifically, any pointer must either be
accessible through a global variable or be added as an explicit
garbage collection root in `runtime.markroot`.
2. If the memory is reused, the heap pointers must be zero-initialized
before they become visible as GC roots. Otherwise, the GC may
observe stale heap pointers. See "Zero-initialization versus
zeroing".

Zero-initialization versus zeroing
==================================
There are two types of zeroing in the runtime, depending on whether
the memory is already initialized to a type-safe state.
If memory is not in a type-safe state, meaning it potentially contains
"garbage" because it was just allocated and it is being initialized
for first use, then it must be *zero-initialized* using
`memclrNoHeapPointers` or non-pointer writes. This does not perform
write barriers.
If memory is already in a type-safe state and is simply being set to
the zero value, this must be done using regular writes, `typedmemclr`,
or `memclrHasPointers`. This performs write barriers.
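
Concretely (a hedged sketch; `p` and `typ`, the `*_type` describing
the value stored at `p`, are assumed names):

```go
// Freshly obtained unmanaged memory: not yet type safe, so it must
// be cleared without write barriers before typed data is stored.
memclrNoHeapPointers(p, typ.Size_)

// Memory already holding a valid value of typ: the old contents may
// include heap pointers the GC must observe dying, so zeroing must
// go through write barriers.
typedmemclr(typ, p)
```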

Runtime-only compiler directives
================================
In addition to the "//go:" directives documented in "go doc compile",
the compiler supports additional directives only in the runtime.

go:systemstack
--------------
`go:systemstack` indicates that a function must run on the system
stack. This is checked dynamically by a special function prologue.
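
For example (a sketch; the function name is illustrative):

```go
// mustRunOnG0 traps at run time if entered from a user stack.
//
//go:systemstack
func mustRunOnG0() {
	// ... work that assumes the g0 stack ...
}
```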

go:nowritebarrier
-----------------
`go:nowritebarrier` directs the compiler to emit an error if the
following function contains any write barriers. (It *does not*
suppress the generation of write barriers; it is simply an assertion.)
Usually you want `go:nowritebarrierrec`. `go:nowritebarrier` is
primarily useful in situations where it's "nice" not to have write
barriers, but not required for correctness.

go:nowritebarrierrec and go:yeswritebarrierrec
----------------------------------------------
`go:nowritebarrierrec` directs the compiler to emit an error if the
following function or any function it calls recursively, up to a
`go:yeswritebarrierrec`, contains a write barrier.
Logically, the compiler floods the call graph starting from each
`go:nowritebarrierrec` function and produces an error if it encounters
a function containing a write barrier. This flood stops at
`go:yeswritebarrierrec` functions.
`go:nowritebarrierrec` is used in the implementation of the write
barrier to prevent infinite loops.
Both directives are used in the scheduler. The write barrier requires
an active P (`getg().m.p != nil`) and scheduler code often runs
without an active P. In this case, `go:nowritebarrierrec` is used on
functions that release the P or may run without a P and
`go:yeswritebarrierrec` is used when code re-acquires an active P.
Since these are function-level annotations, code that releases or
acquires a P may need to be split across two functions.
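
Schematically (a hedged sketch of the scheduler convention; the
annotations mirror the real usage, but the bodies and signatures are
illustrative):

```go
// May run without a P; the compiler verifies that no write barrier
// is reachable from here until a yeswritebarrierrec boundary.
//
//go:nowritebarrierrec
func schedule() {
	gp := findRunnable()
	execute(gp)
}

// By the time execute runs, the M holds a P again, so write
// barriers are permitted from here down.
//
//go:yeswritebarrierrec
func execute(gp *g) {
	// ...
}
```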

go:uintptrkeepalive
-------------------
The //go:uintptrkeepalive directive must be followed by a function declaration.
It specifies that the function's uintptr arguments may be pointer values that
have been converted to uintptr and must be kept alive for the duration of the
call, even though from the types alone it would appear that the object is no
longer needed during the call.
This directive is similar to //go:uintptrescapes, but it does not force
arguments to escape. Since stack growth does not understand these arguments,
this directive must be used with //go:nosplit (in the marked function and all
transitive calls) to prevent stack growth.
The conversion from pointer to uintptr must appear in the argument list of any
call to this function. This directive is used for some low-level system call
implementations.
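
A sketch modeled on those wrappers (everything here is hypothetical:
`doCall`, its assembly helper `stubCall`, and the call site):

```go
//go:uintptrkeepalive
//go:nosplit
func doCall(fn, a1 uintptr) uintptr {
	return stubCall(fn, a1) // stubCall: assumed nosplit assembly routine
}

// At every call site, the pointer-to-uintptr conversion must appear
// in the argument list so the compiler keeps *p alive for the call:
ret := doCall(fnAddr, uintptr(unsafe.Pointer(p)))
```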

5 src/runtime/Makefile Normal file

@@ -0,0 +1,5 @@
# Copyright 2009 The Go Authors. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
include ../Make.dist

117 src/runtime/abi_test.go Normal file

@@ -0,0 +1,117 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build goexperiment.regabiargs
// This file contains tests specific to making sure the register ABI
// works in a bunch of contexts in the runtime.
package runtime_test
import (
"internal/abi"
"internal/runtime/atomic"
"internal/testenv"
"os"
"os/exec"
"runtime"
"strings"
"testing"
"time"
)
var regConfirmRun atomic.Int32
//go:registerparams
func regFinalizerPointer(v *TintPointer) (int, float32, [10]byte) {
regConfirmRun.Store(int32(*(*int)(v.p)))
return 5151, 4.0, [10]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
}
//go:registerparams
func regFinalizerIface(v Tinter) (int, float32, [10]byte) {
regConfirmRun.Store(int32(*(*int)(v.(*TintPointer).p)))
return 5151, 4.0, [10]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
}
// TintPointer has a pointer member to make sure that it isn't allocated by the
// tiny allocator, so we know when its finalizer will run.
type TintPointer struct {
p *Tint
}
func (*TintPointer) m() {}
func TestFinalizerRegisterABI(t *testing.T) {
testenv.MustHaveExec(t)
// Actually run the test in a subprocess because we don't want
// finalizers from other tests interfering.
if os.Getenv("TEST_FINALIZER_REGABI") != "1" {
cmd := testenv.CleanCmdEnv(exec.Command(os.Args[0], "-test.run=^TestFinalizerRegisterABI$", "-test.v"))
cmd.Env = append(cmd.Env, "TEST_FINALIZER_REGABI=1")
out, err := cmd.CombinedOutput()
if !strings.Contains(string(out), "PASS\n") || err != nil {
t.Fatalf("%s\n(exit status %v)", string(out), err)
}
return
}
// Optimistically clear any latent finalizers from e.g. the testing
// package before continuing.
//
// It's possible that a finalizer only becomes available to run
// after this point, which would interfere with the test and could
// cause a crash, but because we're running in a separate process
// it's extremely unlikely.
runtime.GC()
runtime.GC()
// fing will only pick the new IntRegArgs up if it's currently
// sleeping and wakes up, so wait for it to go to sleep.
success := false
for i := 0; i < 100; i++ {
if runtime.FinalizerGAsleep() {
success = true
break
}
time.Sleep(20 * time.Millisecond)
}
if !success {
t.Fatal("finalizer not asleep?")
}
argRegsBefore := runtime.SetIntArgRegs(abi.IntArgRegs)
defer runtime.SetIntArgRegs(argRegsBefore)
tests := []struct {
name string
fin any
confirmValue int
}{
{"Pointer", regFinalizerPointer, -1},
{"Interface", regFinalizerIface, -2},
}
for i := range tests {
test := &tests[i]
t.Run(test.name, func(t *testing.T) {
x := &TintPointer{p: new(Tint)}
*x.p = (Tint)(test.confirmValue)
runtime.SetFinalizer(x, test.fin)
runtime.KeepAlive(x)
// Queue the finalizer.
runtime.GC()
runtime.GC()
if !runtime.BlockUntilEmptyFinalizerQueue(int64(time.Second)) {
t.Fatal("finalizer failed to execute")
}
if got := int(regConfirmRun.Load()); got != test.confirmValue {
t.Fatalf("wrong finalizer executed? got %d, want %d", got, test.confirmValue)
}
})
}
}

511 src/runtime/alg.go Normal file

@@ -0,0 +1,511 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package runtime
import (
"internal/abi"
"internal/cpu"
"internal/goarch"
"unsafe"
)
const (
c0 = uintptr((8-goarch.PtrSize)/4*2860486313 + (goarch.PtrSize-4)/4*33054211828000289)
c1 = uintptr((8-goarch.PtrSize)/4*3267000013 + (goarch.PtrSize-4)/4*23344194077549503)
)
func memhash0(p unsafe.Pointer, h uintptr) uintptr {
return h
}
func memhash8(p unsafe.Pointer, h uintptr) uintptr {
return memhash(p, h, 1)
}
func memhash16(p unsafe.Pointer, h uintptr) uintptr {
return memhash(p, h, 2)
}
func memhash128(p unsafe.Pointer, h uintptr) uintptr {
return memhash(p, h, 16)
}
//go:nosplit
func memhash_varlen(p unsafe.Pointer, h uintptr) uintptr {
ptr := getclosureptr()
size := *(*uintptr)(unsafe.Pointer(ptr + unsafe.Sizeof(h)))
return memhash(p, h, size)
}
// runtime variable to check if the processor we're running on
// actually supports the instructions used by the AES-based
// hash implementation.
var useAeshash bool
// in asm_*.s
// memhash should be an internal detail,
// but widely used packages access it using linkname.
// Notable members of the hall of shame include:
// - github.com/aacfactory/fns
// - github.com/dgraph-io/ristretto
// - github.com/minio/simdjson-go
// - github.com/nbd-wtf/go-nostr
// - github.com/outcaste-io/ristretto
// - github.com/puzpuzpuz/xsync/v2
// - github.com/puzpuzpuz/xsync/v3
// - github.com/segmentio/parquet-go
// - github.com/parquet-go/parquet-go
// - github.com/authzed/spicedb
// - github.com/pingcap/badger
//
// Do not remove or change the type signature.
// See go.dev/issue/67401.
//
//go:linkname memhash
func memhash(p unsafe.Pointer, h, s uintptr) uintptr
// memhash32 should be an internal detail,
// but widely used packages access it using linkname.
// Notable members of the hall of shame include:
// - github.com/segmentio/parquet-go
// - github.com/parquet-go/parquet-go
//
// Do not remove or change the type signature.
// See go.dev/issue/67401.
//
//go:linkname memhash32
func memhash32(p unsafe.Pointer, h uintptr) uintptr
// memhash64 should be an internal detail,
// but widely used packages access it using linkname.
// Notable members of the hall of shame include:
// - github.com/segmentio/parquet-go
// - github.com/parquet-go/parquet-go
//
// Do not remove or change the type signature.
// See go.dev/issue/67401.
//
//go:linkname memhash64
func memhash64(p unsafe.Pointer, h uintptr) uintptr
// strhash should be an internal detail,
// but widely used packages access it using linkname.
// Notable members of the hall of shame include:
// - github.com/aristanetworks/goarista
// - github.com/bytedance/sonic
// - github.com/bytedance/go-tagexpr/v2
// - github.com/cloudwego/frugal
// - github.com/cloudwego/dynamicgo
// - github.com/v2fly/v2ray-core/v5
//
// Do not remove or change the type signature.
// See go.dev/issue/67401.
//
//go:linkname strhash
func strhash(p unsafe.Pointer, h uintptr) uintptr
func strhashFallback(a unsafe.Pointer, h uintptr) uintptr {
x := (*stringStruct)(a)
return memhashFallback(x.str, h, uintptr(x.len))
}
// NOTE: Because NaN != NaN, a map can contain any
// number of (mostly useless) entries keyed with NaNs.
// To avoid long hash chains, we assign a random number
// as the hash value for a NaN.
func f32hash(p unsafe.Pointer, h uintptr) uintptr {
f := *(*float32)(p)
switch {
case f == 0:
return c1 * (c0 ^ h) // +0, -0
case f != f:
return c1 * (c0 ^ h ^ uintptr(rand())) // any kind of NaN
default:
return memhash(p, h, 4)
}
}
func f64hash(p unsafe.Pointer, h uintptr) uintptr {
f := *(*float64)(p)
switch {
case f == 0:
return c1 * (c0 ^ h) // +0, -0
case f != f:
return c1 * (c0 ^ h ^ uintptr(rand())) // any kind of NaN
default:
return memhash(p, h, 8)
}
}
func c64hash(p unsafe.Pointer, h uintptr) uintptr {
x := (*[2]float32)(p)
return f32hash(unsafe.Pointer(&x[1]), f32hash(unsafe.Pointer(&x[0]), h))
}
func c128hash(p unsafe.Pointer, h uintptr) uintptr {
x := (*[2]float64)(p)
return f64hash(unsafe.Pointer(&x[1]), f64hash(unsafe.Pointer(&x[0]), h))
}
func interhash(p unsafe.Pointer, h uintptr) uintptr {
a := (*iface)(p)
tab := a.tab
if tab == nil {
return h
}
t := tab.Type
if t.Equal == nil {
// Check hashability here. We could do this check inside
// typehash, but we want to report the topmost type in
// the error text (e.g. in a struct with a field of slice type
// we want to report the struct, not the slice).
panic(errorString("hash of unhashable type " + toRType(t).string()))
}
if isDirectIface(t) {
return c1 * typehash(t, unsafe.Pointer(&a.data), h^c0)
} else {
return c1 * typehash(t, a.data, h^c0)
}
}
// nilinterhash should be an internal detail,
// but widely used packages access it using linkname.
// Notable members of the hall of shame include:
// - github.com/anacrolix/stm
// - github.com/aristanetworks/goarista
//
// Do not remove or change the type signature.
// See go.dev/issue/67401.
//
//go:linkname nilinterhash
func nilinterhash(p unsafe.Pointer, h uintptr) uintptr {
a := (*eface)(p)
t := a._type
if t == nil {
return h
}
if t.Equal == nil {
// See comment in interhash above.
panic(errorString("hash of unhashable type " + toRType(t).string()))
}
if isDirectIface(t) {
return c1 * typehash(t, unsafe.Pointer(&a.data), h^c0)
} else {
return c1 * typehash(t, a.data, h^c0)
}
}
// typehash computes the hash of the object of type t at address p.
// h is the seed.
// This function is seldom used. Most maps use for hashing either
// fixed functions (e.g. f32hash) or compiler-generated functions
// (e.g. for a type like struct { x, y string }). This implementation
// is slower but more general and is used for hashing interface types
// (called from interhash or nilinterhash, above) or for hashing in
// maps generated by reflect.MapOf (reflect_typehash, below).
// Note: this function must match the compiler generated
// functions exactly. See issue 37716.
//
// typehash should be an internal detail,
// but widely used packages access it using linkname.
// Notable members of the hall of shame include:
// - github.com/puzpuzpuz/xsync/v2
// - github.com/puzpuzpuz/xsync/v3
//
// Do not remove or change the type signature.
// See go.dev/issue/67401.
//
//go:linkname typehash
func typehash(t *_type, p unsafe.Pointer, h uintptr) uintptr {
if t.TFlag&abi.TFlagRegularMemory != 0 {
// Handle ptr sizes specially, see issue 37086.
switch t.Size_ {
case 4:
return memhash32(p, h)
case 8:
return memhash64(p, h)
default:
return memhash(p, h, t.Size_)
}
}
switch t.Kind_ & abi.KindMask {
case abi.Float32:
return f32hash(p, h)
case abi.Float64:
return f64hash(p, h)
case abi.Complex64:
return c64hash(p, h)
case abi.Complex128:
return c128hash(p, h)
case abi.String:
return strhash(p, h)
case abi.Interface:
i := (*interfacetype)(unsafe.Pointer(t))
if len(i.Methods) == 0 {
return nilinterhash(p, h)
}
return interhash(p, h)
case abi.Array:
a := (*arraytype)(unsafe.Pointer(t))
for i := uintptr(0); i < a.Len; i++ {
h = typehash(a.Elem, add(p, i*a.Elem.Size_), h)
}
return h
case abi.Struct:
s := (*structtype)(unsafe.Pointer(t))
for _, f := range s.Fields {
if f.Name.IsBlank() {
continue
}
h = typehash(f.Typ, add(p, f.Offset), h)
}
return h
default:
// Should never happen, as typehash should only be called
// with comparable types.
panic(errorString("hash of unhashable type " + toRType(t).string()))
}
}
func mapKeyError(t *maptype, p unsafe.Pointer) error {
if !t.HashMightPanic() {
return nil
}
return mapKeyError2(t.Key, p)
}
func mapKeyError2(t *_type, p unsafe.Pointer) error {
if t.TFlag&abi.TFlagRegularMemory != 0 {
return nil
}
switch t.Kind_ & abi.KindMask {
case abi.Float32, abi.Float64, abi.Complex64, abi.Complex128, abi.String:
return nil
case abi.Interface:
i := (*interfacetype)(unsafe.Pointer(t))
var t *_type
var pdata *unsafe.Pointer
if len(i.Methods) == 0 {
a := (*eface)(p)
t = a._type
if t == nil {
return nil
}
pdata = &a.data
} else {
a := (*iface)(p)
if a.tab == nil {
return nil
}
t = a.tab.Type
pdata = &a.data
}
if t.Equal == nil {
return errorString("hash of unhashable type " + toRType(t).string())
}
if isDirectIface(t) {
return mapKeyError2(t, unsafe.Pointer(pdata))
} else {
return mapKeyError2(t, *pdata)
}
case abi.Array:
a := (*arraytype)(unsafe.Pointer(t))
for i := uintptr(0); i < a.Len; i++ {
if err := mapKeyError2(a.Elem, add(p, i*a.Elem.Size_)); err != nil {
return err
}
}
return nil
case abi.Struct:
s := (*structtype)(unsafe.Pointer(t))
for _, f := range s.Fields {
if f.Name.IsBlank() {
continue
}
if err := mapKeyError2(f.Typ, add(p, f.Offset)); err != nil {
return err
}
}
return nil
default:
// Should never happen, keep this case for robustness.
return errorString("hash of unhashable type " + toRType(t).string())
}
}
//go:linkname reflect_typehash reflect.typehash
func reflect_typehash(t *_type, p unsafe.Pointer, h uintptr) uintptr {
return typehash(t, p, h)
}
func memequal0(p, q unsafe.Pointer) bool {
return true
}
func memequal8(p, q unsafe.Pointer) bool {
return *(*int8)(p) == *(*int8)(q)
}
func memequal16(p, q unsafe.Pointer) bool {
return *(*int16)(p) == *(*int16)(q)
}
func memequal32(p, q unsafe.Pointer) bool {
return *(*int32)(p) == *(*int32)(q)
}
func memequal64(p, q unsafe.Pointer) bool {
return *(*int64)(p) == *(*int64)(q)
}
func memequal128(p, q unsafe.Pointer) bool {
return *(*[2]int64)(p) == *(*[2]int64)(q)
}
func f32equal(p, q unsafe.Pointer) bool {
return *(*float32)(p) == *(*float32)(q)
}
func f64equal(p, q unsafe.Pointer) bool {
return *(*float64)(p) == *(*float64)(q)
}
func c64equal(p, q unsafe.Pointer) bool {
return *(*complex64)(p) == *(*complex64)(q)
}
func c128equal(p, q unsafe.Pointer) bool {
return *(*complex128)(p) == *(*complex128)(q)
}
func strequal(p, q unsafe.Pointer) bool {
return *(*string)(p) == *(*string)(q)
}
func interequal(p, q unsafe.Pointer) bool {
x := *(*iface)(p)
y := *(*iface)(q)
return x.tab == y.tab && ifaceeq(x.tab, x.data, y.data)
}
func nilinterequal(p, q unsafe.Pointer) bool {
x := *(*eface)(p)
y := *(*eface)(q)
return x._type == y._type && efaceeq(x._type, x.data, y.data)
}
func efaceeq(t *_type, x, y unsafe.Pointer) bool {
if t == nil {
return true
}
eq := t.Equal
if eq == nil {
panic(errorString("comparing uncomparable type " + toRType(t).string()))
}
if isDirectIface(t) {
// Direct interface types are ptr, chan, map, func, and single-element structs/arrays thereof.
// Maps and funcs are not comparable, so they can't reach here.
// Ptrs, chans, and single-element items can be compared directly using ==.
return x == y
}
return eq(x, y)
}
func ifaceeq(tab *itab, x, y unsafe.Pointer) bool {
if tab == nil {
return true
}
t := tab.Type
eq := t.Equal
if eq == nil {
panic(errorString("comparing uncomparable type " + toRType(t).string()))
}
if isDirectIface(t) {
// See comment in efaceeq.
return x == y
}
return eq(x, y)
}
// Testing adapters for hash quality tests (see hash_test.go)
//
// stringHash should be an internal detail,
// but widely used packages access it using linkname.
// Notable members of the hall of shame include:
// - github.com/k14s/starlark-go
//
// Do not remove or change the type signature.
// See go.dev/issue/67401.
//
//go:linkname stringHash
func stringHash(s string, seed uintptr) uintptr {
return strhash(noescape(unsafe.Pointer(&s)), seed)
}
func bytesHash(b []byte, seed uintptr) uintptr {
s := (*slice)(unsafe.Pointer(&b))
return memhash(s.array, seed, uintptr(s.len))
}
func int32Hash(i uint32, seed uintptr) uintptr {
return memhash32(noescape(unsafe.Pointer(&i)), seed)
}
func int64Hash(i uint64, seed uintptr) uintptr {
return memhash64(noescape(unsafe.Pointer(&i)), seed)
}
func efaceHash(i any, seed uintptr) uintptr {
return nilinterhash(noescape(unsafe.Pointer(&i)), seed)
}
func ifaceHash(i interface {
F()
}, seed uintptr) uintptr {
return interhash(noescape(unsafe.Pointer(&i)), seed)
}
const hashRandomBytes = goarch.PtrSize / 4 * 64
// used in asm_{386,amd64,arm64}.s to seed the hash function
var aeskeysched [hashRandomBytes]byte
// used in hash{32,64}.go to seed the hash function
var hashkey [4]uintptr
func alginit() {
// Install AES hash algorithms if the instructions needed are present.
if (GOARCH == "386" || GOARCH == "amd64") &&
cpu.X86.HasAES && // AESENC
cpu.X86.HasSSSE3 && // PSHUFB
cpu.X86.HasSSE41 { // PINSR{D,Q}
initAlgAES()
return
}
if GOARCH == "arm64" && cpu.ARM64.HasAES {
initAlgAES()
return
}
for i := range hashkey {
hashkey[i] = uintptr(bootstrapRand())
}
}
func initAlgAES() {
useAeshash = true
// Initialize with random data so hash collisions will be hard to engineer.
key := (*[hashRandomBytes / 8]uint64)(unsafe.Pointer(&aeskeysched))
for i := range key {
key[i] = bootstrapRand()
}
}
// Note: These routines perform the read with a native endianness.
func readUnaligned32(p unsafe.Pointer) uint32 {
q := (*[4]byte)(p)
if goarch.BigEndian {
return uint32(q[3]) | uint32(q[2])<<8 | uint32(q[1])<<16 | uint32(q[0])<<24
}
return uint32(q[0]) | uint32(q[1])<<8 | uint32(q[2])<<16 | uint32(q[3])<<24
}
func readUnaligned64(p unsafe.Pointer) uint64 {
q := (*[8]byte)(p)
if goarch.BigEndian {
return uint64(q[7]) | uint64(q[6])<<8 | uint64(q[5])<<16 | uint64(q[4])<<24 |
uint64(q[3])<<32 | uint64(q[2])<<40 | uint64(q[1])<<48 | uint64(q[0])<<56
}
return uint64(q[0]) | uint64(q[1])<<8 | uint64(q[2])<<16 | uint64(q[3])<<24 | uint64(q[4])<<32 | uint64(q[5])<<40 | uint64(q[6])<<48 | uint64(q[7])<<56
}

51 src/runtime/align_runtime_test.go Normal file

@@ -0,0 +1,51 @@
// Copyright 2022 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// This file lives in the runtime package
// so we can get access to the runtime guts.
// The rest of the implementation of this test is in align_test.go.
package runtime
import "unsafe"
// AtomicFields is the set of fields on which we perform 64-bit atomic
// operations (all the *64 operations in internal/runtime/atomic).
var AtomicFields = []uintptr{
unsafe.Offsetof(m{}.procid),
unsafe.Offsetof(p{}.gcFractionalMarkTime),
unsafe.Offsetof(profBuf{}.overflow),
unsafe.Offsetof(profBuf{}.overflowTime),
unsafe.Offsetof(heapStatsDelta{}.tinyAllocCount),
unsafe.Offsetof(heapStatsDelta{}.smallAllocCount),
unsafe.Offsetof(heapStatsDelta{}.smallFreeCount),
unsafe.Offsetof(heapStatsDelta{}.largeAlloc),
unsafe.Offsetof(heapStatsDelta{}.largeAllocCount),
unsafe.Offsetof(heapStatsDelta{}.largeFree),
unsafe.Offsetof(heapStatsDelta{}.largeFreeCount),
unsafe.Offsetof(heapStatsDelta{}.committed),
unsafe.Offsetof(heapStatsDelta{}.released),
unsafe.Offsetof(heapStatsDelta{}.inHeap),
unsafe.Offsetof(heapStatsDelta{}.inStacks),
unsafe.Offsetof(heapStatsDelta{}.inPtrScalarBits),
unsafe.Offsetof(heapStatsDelta{}.inWorkBufs),
unsafe.Offsetof(lfnode{}.next),
unsafe.Offsetof(mstats{}.last_gc_nanotime),
unsafe.Offsetof(mstats{}.last_gc_unix),
unsafe.Offsetof(workType{}.bytesMarked),
}
// AtomicVariables is the set of global variables on which we perform
// 64-bit atomic operations.
var AtomicVariables = []unsafe.Pointer{
unsafe.Pointer(&ncgocall),
unsafe.Pointer(&test_z64),
unsafe.Pointer(&blockprofilerate),
unsafe.Pointer(&mutexprofilerate),
unsafe.Pointer(&gcController),
unsafe.Pointer(&memstats),
unsafe.Pointer(&sched),
unsafe.Pointer(&ticks),
unsafe.Pointer(&work),
}

200 src/runtime/align_test.go Normal file

@@ -0,0 +1,200 @@
// Copyright 2022 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package runtime_test
import (
"go/ast"
"go/build"
"go/importer"
"go/parser"
"go/printer"
"go/token"
"go/types"
"internal/testenv"
"os"
"regexp"
"runtime"
"strings"
"testing"
)
// Check that 64-bit fields on which we apply atomic operations
// are aligned to 8 bytes. This can be a problem on 32-bit systems.
func TestAtomicAlignment(t *testing.T) {
testenv.MustHaveGoBuild(t) // go command needed to resolve std .a files for importer.Default().
// Read the code making the tables above, to see which fields and
// variables we are currently checking.
checked := map[string]bool{}
x, err := os.ReadFile("./align_runtime_test.go")
if err != nil {
t.Fatalf("read failed: %v", err)
}
fieldDesc := map[int]string{}
r := regexp.MustCompile(`unsafe[.]Offsetof[(](\w+){}[.](\w+)[)]`)
matches := r.FindAllStringSubmatch(string(x), -1)
for i, v := range matches {
checked["field runtime."+v[1]+"."+v[2]] = true
fieldDesc[i] = v[1] + "." + v[2]
}
varDesc := map[int]string{}
r = regexp.MustCompile(`unsafe[.]Pointer[(]&(\w+)[)]`)
matches = r.FindAllStringSubmatch(string(x), -1)
for i, v := range matches {
checked["var "+v[1]] = true
varDesc[i] = v[1]
}
// Check all of our alignments. This is the actual core of the test.
for i, d := range runtime.AtomicFields {
if d%8 != 0 {
t.Errorf("field alignment of %s failed: offset is %d", fieldDesc[i], d)
}
}
for i, p := range runtime.AtomicVariables {
if uintptr(p)%8 != 0 {
t.Errorf("variable alignment of %s failed: address is %x", varDesc[i], p)
}
}
// The code above is the actual test. The code below attempts to check
// that the tables used by the code above are exhaustive.
// Parse the whole runtime package, checking that arguments of
// appropriate atomic operations are in the list above.
fset := token.NewFileSet()
m, err := parser.ParseDir(fset, ".", nil, 0)
if err != nil {
t.Fatalf("parsing runtime failed: %v", err)
}
pkg := m["runtime"] // Note: ignore runtime_test and main packages
// Filter files by those for the current architecture/os being tested.
fileMap := map[string]bool{}
for _, f := range buildableFiles(t, ".") {
fileMap[f] = true
}
var files []*ast.File
for fname, f := range pkg.Files {
if fileMap[fname] {
files = append(files, f)
}
}
// Call go/types to analyze the runtime package.
var info types.Info
info.Types = map[ast.Expr]types.TypeAndValue{}
conf := types.Config{Importer: importer.Default()}
_, err = conf.Check("runtime", fset, files, &info)
if err != nil {
t.Fatalf("typechecking runtime failed: %v", err)
}
// Analyze all atomic.*64 callsites.
v := Visitor{t: t, fset: fset, types: info.Types, checked: checked}
ast.Walk(&v, pkg)
}
type Visitor struct {
fset *token.FileSet
types map[ast.Expr]types.TypeAndValue
checked map[string]bool
t *testing.T
}
func (v *Visitor) Visit(n ast.Node) ast.Visitor {
c, ok := n.(*ast.CallExpr)
if !ok {
return v
}
f, ok := c.Fun.(*ast.SelectorExpr)
if !ok {
return v
}
p, ok := f.X.(*ast.Ident)
if !ok {
return v
}
if p.Name != "atomic" {
return v
}
if !strings.HasSuffix(f.Sel.Name, "64") {
return v
}
a := c.Args[0]
// This is a call to atomic.XXX64(a, ...). Make sure a is aligned to 8 bytes.
// XXX = one of Load, Store, Cas, etc.
// The arg we care about the alignment of is always the first one.
if u, ok := a.(*ast.UnaryExpr); ok && u.Op == token.AND {
v.checkAddr(u.X)
return v
}
// Other cases there's nothing we can check. Assume we're ok.
v.t.Logf("unchecked atomic operation %s %v", v.fset.Position(n.Pos()), v.print(n))
return v
}
// checkAddr checks to make sure n is a properly aligned address for a 64-bit atomic operation.
func (v *Visitor) checkAddr(n ast.Node) {
switch n := n.(type) {
case *ast.IndexExpr:
// Alignment of an array element is the same as the whole array.
v.checkAddr(n.X)
return
case *ast.Ident:
key := "var " + v.print(n)
if !v.checked[key] {
v.t.Errorf("unchecked variable %s %s", v.fset.Position(n.Pos()), key)
}
return
case *ast.SelectorExpr:
t := v.types[n.X].Type
if t == nil {
// Not sure what is happening here, go/types fails to
// type the selector arg on some platforms.
return
}
if p, ok := t.(*types.Pointer); ok {
// Note: we assume here that the pointer p in p.foo is properly
// aligned. We just check that foo is at a properly aligned offset.
t = p.Elem()
} else {
v.checkAddr(n.X)
}
if t.Underlying() == t {
v.t.Errorf("analysis can't handle unnamed type %s %v", v.fset.Position(n.Pos()), t)
}
key := "field " + t.String() + "." + n.Sel.Name
if !v.checked[key] {
v.t.Errorf("unchecked field %s %s", v.fset.Position(n.Pos()), key)
}
default:
v.t.Errorf("unchecked atomic address %s %v", v.fset.Position(n.Pos()), v.print(n))
}
}
func (v *Visitor) print(n ast.Node) string {
var b strings.Builder
printer.Fprint(&b, v.fset, n)
return b.String()
}
// buildableFiles returns the list of files in the given directory
// that are actually used for the build, given GOOS/GOARCH restrictions.
func buildableFiles(t *testing.T, dir string) []string {
ctxt := build.Default
ctxt.CgoEnabled = true
pkg, err := ctxt.ImportDir(dir, 0)
if err != nil {
t.Fatalf("can't find buildable files: %v", err)
}
return pkg.GoFiles
}

1122 src/runtime/arena.go Normal file

File diff suppressed because it is too large.

526 src/runtime/arena_test.go Normal file

@@ -0,0 +1,526 @@
// Copyright 2022 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package runtime_test
import (
"internal/goarch"
"internal/runtime/atomic"
"reflect"
. "runtime"
"runtime/debug"
"testing"
"time"
"unsafe"
)
type smallScalar struct {
X uintptr
}
type smallPointer struct {
X *smallPointer
}
type smallPointerMix struct {
A *smallPointer
B byte
C *smallPointer
D [11]byte
}
type mediumScalarEven [8192]byte
type mediumScalarOdd [3321]byte
type mediumPointerEven [1024]*smallPointer
type mediumPointerOdd [1023]*smallPointer
type largeScalar [UserArenaChunkBytes + 1]byte
type largePointer [UserArenaChunkBytes/unsafe.Sizeof(&smallPointer{}) + 1]*smallPointer
func TestUserArena(t *testing.T) {
// Set GOMAXPROCS to 2 so we don't run too many of these
// tests in parallel.
defer GOMAXPROCS(GOMAXPROCS(2))
// Start a subtest so that we can clean up after any parallel tests within.
t.Run("Alloc", func(t *testing.T) {
ss := &smallScalar{5}
runSubTestUserArenaNew(t, ss, true)
sp := &smallPointer{new(smallPointer)}
runSubTestUserArenaNew(t, sp, true)
spm := &smallPointerMix{sp, 5, nil, [11]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}}
runSubTestUserArenaNew(t, spm, true)
mse := new(mediumScalarEven)
for i := range mse {
mse[i] = 121
}
runSubTestUserArenaNew(t, mse, true)
mso := new(mediumScalarOdd)
for i := range mso {
mso[i] = 122
}
runSubTestUserArenaNew(t, mso, true)
mpe := new(mediumPointerEven)
for i := range mpe {
mpe[i] = sp
}
runSubTestUserArenaNew(t, mpe, true)
mpo := new(mediumPointerOdd)
for i := range mpo {
mpo[i] = sp
}
runSubTestUserArenaNew(t, mpo, true)
ls := new(largeScalar)
for i := range ls {
ls[i] = 123
}
// Not in parallel because we don't want to hold this large allocation live.
runSubTestUserArenaNew(t, ls, false)
lp := new(largePointer)
for i := range lp {
lp[i] = sp
}
// Not in parallel because we don't want to hold this large allocation live.
runSubTestUserArenaNew(t, lp, false)
sss := make([]smallScalar, 25)
for i := range sss {
sss[i] = smallScalar{12}
}
runSubTestUserArenaSlice(t, sss, true)
mpos := make([]mediumPointerOdd, 5)
for i := range mpos {
mpos[i] = *mpo
}
runSubTestUserArenaSlice(t, mpos, true)
sps := make([]smallPointer, UserArenaChunkBytes/unsafe.Sizeof(smallPointer{})+1)
for i := range sps {
sps[i] = *sp
}
// Not in parallel because we don't want to hold this large allocation live.
runSubTestUserArenaSlice(t, sps, false)
// Test zero-sized types.
t.Run("struct{}", func(t *testing.T) {
arena := NewUserArena()
var x any
x = (*struct{})(nil)
arena.New(&x)
if v := unsafe.Pointer(x.(*struct{})); v != ZeroBase {
t.Errorf("expected zero-sized type to be allocated as zerobase: got %x, want %x", v, ZeroBase)
}
arena.Free()
})
t.Run("[]struct{}", func(t *testing.T) {
arena := NewUserArena()
var sl []struct{}
arena.Slice(&sl, 10)
if v := unsafe.Pointer(&sl[0]); v != ZeroBase {
t.Errorf("expected zero-sized type to be allocated as zerobase: got %x, want %x", v, ZeroBase)
}
arena.Free()
})
t.Run("[]int (cap 0)", func(t *testing.T) {
arena := NewUserArena()
var sl []int
arena.Slice(&sl, 0)
if len(sl) != 0 {
t.Errorf("expected requested zero-sized slice to still have zero length: got %x, want 0", len(sl))
}
arena.Free()
})
})
// Run a GC cycle to get any arenas off the quarantine list.
GC()
if n := GlobalWaitingArenaChunks(); n != 0 {
t.Errorf("expected zero waiting arena chunks, found %d", n)
}
}
func runSubTestUserArenaNew[S comparable](t *testing.T, value *S, parallel bool) {
t.Run(reflect.TypeOf(value).Elem().Name(), func(t *testing.T) {
if parallel {
t.Parallel()
}
// Allocate and write data, enough to exhaust the arena.
//
// This is an underestimate, likely leaving some space in the arena. That's a good thing,
// because it gives us coverage of boundary cases.
n := int(UserArenaChunkBytes / unsafe.Sizeof(*value))
if n == 0 {
n = 1
}
// Create a new arena and do a bunch of operations on it.
arena := NewUserArena()
arenaValues := make([]*S, 0, n)
for j := 0; j < n; j++ {
var x any
x = (*S)(nil)
arena.New(&x)
s := x.(*S)
*s = *value
arenaValues = append(arenaValues, s)
}
// Check integrity of allocated data.
for _, s := range arenaValues {
if *s != *value {
t.Errorf("failed integrity check: got %#v, want %#v", *s, *value)
}
}
// Release the arena.
arena.Free()
})
}
func runSubTestUserArenaSlice[S comparable](t *testing.T, value []S, parallel bool) {
t.Run("[]"+reflect.TypeOf(value).Elem().Name(), func(t *testing.T) {
if parallel {
t.Parallel()
}
// Allocate and write data, enough to exhaust the arena.
//
// This is an underestimate, likely leaving some space in the arena. That's a good thing,
// because it gives us coverage of boundary cases.
n := int(UserArenaChunkBytes / (unsafe.Sizeof(*new(S)) * uintptr(cap(value))))
if n == 0 {
n = 1
}
// Create a new arena and do a bunch of operations on it.
arena := NewUserArena()
arenaValues := make([][]S, 0, n)
for j := 0; j < n; j++ {
var sl []S
arena.Slice(&sl, cap(value))
copy(sl, value)
arenaValues = append(arenaValues, sl)
}
// Check integrity of allocated data.
for _, sl := range arenaValues {
for i := range sl {
got := sl[i]
want := value[i]
if got != want {
t.Errorf("failed integrity check: got %#v, want %#v at index %d", got, want, i)
}
}
}
// Release the arena.
arena.Free()
})
}
func TestUserArenaLiveness(t *testing.T) {
t.Run("Free", func(t *testing.T) {
testUserArenaLiveness(t, false)
})
t.Run("Finalizer", func(t *testing.T) {
testUserArenaLiveness(t, true)
})
}
func testUserArenaLiveness(t *testing.T, useArenaFinalizer bool) {
// Disable the GC so that there's zero chance we try doing anything arena related *during*
// a mark phase, since otherwise a bunch of arenas could end up on the fault list.
defer debug.SetGCPercent(debug.SetGCPercent(-1))
// Defensively ensure that any full arena chunks leftover from previous tests have been cleared.
GC()
GC()
arena := NewUserArena()
// Allocate a few pointer-ful but un-initialized objects so that later we can
// place a reference to heap object at a more interesting location.
for i := 0; i < 3; i++ {
var x any
x = (*mediumPointerOdd)(nil)
arena.New(&x)
}
var x any
x = (*smallPointerMix)(nil)
arena.New(&x)
v := x.(*smallPointerMix)
var safeToFinalize atomic.Bool
var finalized atomic.Bool
v.C = new(smallPointer)
SetFinalizer(v.C, func(_ *smallPointer) {
if !safeToFinalize.Load() {
t.Error("finalized arena-referenced object unexpectedly")
}
finalized.Store(true)
})
// Make sure it stays alive.
GC()
GC()
// In order to ensure the object can be freed, we now need to make sure to use
// the entire arena. Exhaust the rest of the arena.
for i := 0; i < int(UserArenaChunkBytes/unsafe.Sizeof(mediumScalarEven{})); i++ {
var x any
x = (*mediumScalarEven)(nil)
arena.New(&x)
}
// Make sure it stays alive again.
GC()
GC()
v = nil
safeToFinalize.Store(true)
if useArenaFinalizer {
arena = nil
// Try to queue the arena finalizer.
GC()
GC()
// In order for the finalizer we actually want to run to execute,
// we need to make sure this one runs first.
if !BlockUntilEmptyFinalizerQueue(int64(2 * time.Second)) {
t.Fatal("finalizer queue was never emptied")
}
} else {
// Free the arena explicitly.
arena.Free()
}
// Try to queue the object's finalizer that we set earlier.
GC()
GC()
if !BlockUntilEmptyFinalizerQueue(int64(2 * time.Second)) {
t.Fatal("finalizer queue was never emptied")
}
if !finalized.Load() {
t.Error("expected arena-referenced object to be finalized")
}
}
func TestUserArenaClearsPointerBits(t *testing.T) {
// This is a regression test for a serious issue wherein if pointer bits
// aren't properly cleared, it's possible to allocate scalar data down
// into a previously pointer-ful area, causing misinterpretation by the GC.
// Create a large object, grab a pointer into it, and free it.
x := new([8 << 20]byte)
xp := uintptr(unsafe.Pointer(&x[124]))
var finalized atomic.Bool
SetFinalizer(x, func(_ *[8 << 20]byte) {
finalized.Store(true)
})
// Write three chunks worth of pointer data. Three gives us a
// high likelihood that when we write 2 later, we'll get the behavior
// we want.
a := NewUserArena()
for i := 0; i < int(UserArenaChunkBytes/goarch.PtrSize*3); i++ {
var x any
x = (*smallPointer)(nil)
a.New(&x)
}
a.Free()
// Recycle the arena chunks.
GC()
GC()
a = NewUserArena()
for i := 0; i < int(UserArenaChunkBytes/goarch.PtrSize*2); i++ {
var x any
x = (*smallScalar)(nil)
a.New(&x)
v := x.(*smallScalar)
// Write a pointer that should not keep x alive.
*v = smallScalar{xp}
}
KeepAlive(x)
x = nil
// Try to free x.
GC()
GC()
if !BlockUntilEmptyFinalizerQueue(int64(2 * time.Second)) {
t.Fatal("finalizer queue was never emptied")
}
if !finalized.Load() {
t.Fatal("heap allocation kept alive through non-pointer reference")
}
// Clean up the arena.
a.Free()
GC()
GC()
}
func TestUserArenaCloneString(t *testing.T) {
a := NewUserArena()
// A static string (not on heap or arena)
var s = "abcdefghij"
// Create a byte slice in the arena, initialize it with s
var b []byte
a.Slice(&b, len(s))
copy(b, s)
// Create a string using the same memory as the byte slice, hence in
// the arena. This could be an arena API, but hasn't really been needed
// yet.
as := unsafe.String(&b[0], len(b))
// Clone should make a copy of as, since it is in the arena.
asCopy := UserArenaClone(as)
if unsafe.StringData(as) == unsafe.StringData(asCopy) {
t.Error("Clone did not make a copy")
}
// Clone should make a copy of subAs, since subAs is just part of as and so is in the arena.
subAs := as[1:3]
subAsCopy := UserArenaClone(subAs)
if unsafe.StringData(subAs) == unsafe.StringData(subAsCopy) {
t.Error("Clone did not make a copy")
}
if len(subAs) != len(subAsCopy) {
t.Errorf("Clone made an incorrect copy (bad length): %d -> %d", len(subAs), len(subAsCopy))
} else {
for i := range subAs {
if subAs[i] != subAsCopy[i] {
t.Errorf("Clone made an incorrect copy (data at index %d): %d -> %d", i, subAs[i], subAs[i])
}
}
}
// Clone should not make a copy of doubleAs, since doubleAs will be on the heap.
doubleAs := as + as
doubleAsCopy := UserArenaClone(doubleAs)
if unsafe.StringData(doubleAs) != unsafe.StringData(doubleAsCopy) {
t.Error("Clone should not have made a copy")
}
// Clone should not make a copy of s, since s is a static string.
sCopy := UserArenaClone(s)
if unsafe.StringData(s) != unsafe.StringData(sCopy) {
t.Error("Clone should not have made a copy")
}
a.Free()
}
func TestUserArenaClonePointer(t *testing.T) {
a := NewUserArena()
// Clone should not make a copy of a heap-allocated smallScalar.
x := Escape(new(smallScalar))
xCopy := UserArenaClone(x)
if unsafe.Pointer(x) != unsafe.Pointer(xCopy) {
t.Errorf("Clone should not have made a copy: %#v -> %#v", x, xCopy)
}
// Clone should make a copy of an arena-allocated smallScalar.
var i any
i = (*smallScalar)(nil)
a.New(&i)
xArena := i.(*smallScalar)
xArenaCopy := UserArenaClone(xArena)
if unsafe.Pointer(xArena) == unsafe.Pointer(xArenaCopy) {
t.Errorf("Clone should have made a copy: %#v -> %#v", xArena, xArenaCopy)
}
if *xArena != *xArenaCopy {
t.Errorf("Clone made an incorrect copy copy: %#v -> %#v", *xArena, *xArenaCopy)
}
a.Free()
}
func TestUserArenaCloneSlice(t *testing.T) {
a := NewUserArena()
// A static string (not on heap or arena)
var s = "klmnopqrstuv"
// Create a byte slice in the arena, initialize it with s
var b []byte
a.Slice(&b, len(s))
copy(b, s)
// Clone should make a copy of b, since it is in the arena.
bCopy := UserArenaClone(b)
if unsafe.Pointer(&b[0]) == unsafe.Pointer(&bCopy[0]) {
t.Errorf("Clone did not make a copy: %#v -> %#v", b, bCopy)
}
if len(b) != len(bCopy) {
t.Errorf("Clone made an incorrect copy (bad length): %d -> %d", len(b), len(bCopy))
} else {
for i := range b {
if b[i] != bCopy[i] {
t.Errorf("Clone made an incorrect copy (data at index %d): %d -> %d", i, b[i], bCopy[i])
}
}
}
// Clone should make a copy of bSub, since bSub is just part of b and so is in the arena.
bSub := b[1:3]
bSubCopy := UserArenaClone(bSub)
if unsafe.Pointer(&bSub[0]) == unsafe.Pointer(&bSubCopy[0]) {
t.Errorf("Clone did not make a copy: %#v -> %#v", bSub, bSubCopy)
}
if len(bSub) != len(bSubCopy) {
t.Errorf("Clone made an incorrect copy (bad length): %d -> %d", len(bSub), len(bSubCopy))
} else {
for i := range bSub {
if bSub[i] != bSubCopy[i] {
t.Errorf("Clone made an incorrect copy (data at index %d): %d -> %d", i, bSub[i], bSubCopy[i])
}
}
}
// Clone should not make a copy of bNotArena, since it will not be in an arena.
bNotArena := make([]byte, len(s))
copy(bNotArena, s)
bNotArenaCopy := UserArenaClone(bNotArena)
if unsafe.Pointer(&bNotArena[0]) != unsafe.Pointer(&bNotArenaCopy[0]) {
t.Error("Clone should not have made a copy")
}
a.Free()
}
func TestUserArenaClonePanic(t *testing.T) {
var s string
func() {
x := smallScalar{2}
defer func() {
if v := recover(); v != nil {
s = v.(string)
}
}()
UserArenaClone(x)
}()
if s == "" {
t.Errorf("expected panic from Clone")
}
}

69 src/runtime/asan.go Normal file

@@ -0,0 +1,69 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build asan
package runtime
import (
"unsafe"
)
// Public address sanitizer API.
func ASanRead(addr unsafe.Pointer, len int) {
sp := getcallersp()
pc := getcallerpc()
doasanread(addr, uintptr(len), sp, pc)
}
func ASanWrite(addr unsafe.Pointer, len int) {
sp := getcallersp()
pc := getcallerpc()
doasanwrite(addr, uintptr(len), sp, pc)
}
// Private interface for the runtime.
const asanenabled = true
// asan{read,write} are nosplit because they may be called between
// fork and exec, when the stack must not grow. See issue #50391.
//go:linkname asanread
//go:nosplit
func asanread(addr unsafe.Pointer, sz uintptr) {
sp := getcallersp()
pc := getcallerpc()
doasanread(addr, sz, sp, pc)
}
//go:linkname asanwrite
//go:nosplit
func asanwrite(addr unsafe.Pointer, sz uintptr) {
sp := getcallersp()
pc := getcallerpc()
doasanwrite(addr, sz, sp, pc)
}
//go:noescape
func doasanread(addr unsafe.Pointer, sz, sp, pc uintptr)
//go:noescape
func doasanwrite(addr unsafe.Pointer, sz, sp, pc uintptr)
//go:noescape
func asanunpoison(addr unsafe.Pointer, sz uintptr)
//go:noescape
func asanpoison(addr unsafe.Pointer, sz uintptr)
//go:noescape
func asanregisterglobals(addr unsafe.Pointer, n uintptr)
// These are called from asan_GOARCH.s
//
//go:cgo_import_static __asan_read_go
//go:cgo_import_static __asan_write_go
//go:cgo_import_static __asan_unpoison_go
//go:cgo_import_static __asan_poison_go
//go:cgo_import_static __asan_register_globals_go

76 src/runtime/asan/asan.go Normal file

@@ -0,0 +1,76 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build asan && linux && (arm64 || amd64 || loong64 || riscv64 || ppc64le)
package asan
/*
#cgo CFLAGS: -fsanitize=address
#cgo LDFLAGS: -fsanitize=address
#include <stdbool.h>
#include <stdint.h>
#include <sanitizer/asan_interface.h>
void __asan_read_go(void *addr, uintptr_t sz, void *sp, void *pc) {
if (__asan_region_is_poisoned(addr, sz)) {
__asan_report_error(pc, 0, sp, addr, false, sz);
}
}
void __asan_write_go(void *addr, uintptr_t sz, void *sp, void *pc) {
if (__asan_region_is_poisoned(addr, sz)) {
__asan_report_error(pc, 0, sp, addr, true, sz);
}
}
void __asan_unpoison_go(void *addr, uintptr_t sz) {
__asan_unpoison_memory_region(addr, sz);
}
void __asan_poison_go(void *addr, uintptr_t sz) {
__asan_poison_memory_region(addr, sz);
}
// Keep in sync with the definition in compiler-rt
// https://github.com/llvm/llvm-project/blob/main/compiler-rt/lib/asan/asan_interface_internal.h#L41
// This structure is used to describe the source location of
// a place where global was defined.
struct _asan_global_source_location {
const char *filename;
int line_no;
int column_no;
};
// Keep in sync with the definition in compiler-rt
// https://github.com/llvm/llvm-project/blob/main/compiler-rt/lib/asan/asan_interface_internal.h#L48
// So far, the current implementation is only compatible with the ASan library from version v7 to v9.
// https://github.com/llvm/llvm-project/blob/main/compiler-rt/lib/asan/asan_init_version.h
// This structure describes an instrumented global variable.
//
// TODO: If a later version of the ASan library changes __asan_global or __asan_global_source_location
// structure, we need to make the same changes.
struct _asan_global {
uintptr_t beg;
uintptr_t size;
uintptr_t size_with_redzone;
const char *name;
const char *module_name;
uintptr_t has_dynamic_init;
struct _asan_global_source_location *location;
uintptr_t odr_indicator;
};
extern void __asan_register_globals(void*, long int);
// Register global variables.
// The 'globals' is an array of structures describing 'n' globals.
void __asan_register_globals_go(void *addr, uintptr_t n) {
struct _asan_global *globals = (struct _asan_global *)(addr);
__asan_register_globals(globals, n);
}
*/
import "C"

23 src/runtime/asan0.go Normal file

@@ -0,0 +1,23 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !asan
// Dummy ASan support API, used when not built with -asan.
package runtime
import (
"unsafe"
)
const asanenabled = false
// Because asanenabled is false, none of these functions should be called.
func asanread(addr unsafe.Pointer, sz uintptr) { throw("asan") }
func asanwrite(addr unsafe.Pointer, sz uintptr) { throw("asan") }
func asanunpoison(addr unsafe.Pointer, sz uintptr) { throw("asan") }
func asanpoison(addr unsafe.Pointer, sz uintptr) { throw("asan") }
func asanregisterglobals(addr unsafe.Pointer, sz uintptr) { throw("asan") }

91 src/runtime/asan_amd64.s Normal file

@@ -0,0 +1,91 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build asan
#include "go_asm.h"
#include "go_tls.h"
#include "funcdata.h"
#include "textflag.h"
// This is like race_amd64.s, but for the asan calls.
// See race_amd64.s for detailed comments.
#ifdef GOOS_windows
#define RARG0 CX
#define RARG1 DX
#define RARG2 R8
#define RARG3 R9
#else
#define RARG0 DI
#define RARG1 SI
#define RARG2 DX
#define RARG3 CX
#endif
// Called from instrumented code.
// func runtime·doasanread(addr unsafe.Pointer, sz, sp, pc uintptr)
TEXT runtime·doasanread(SB), NOSPLIT, $0-32
MOVQ addr+0(FP), RARG0
MOVQ sz+8(FP), RARG1
MOVQ sp+16(FP), RARG2
MOVQ pc+24(FP), RARG3
// void __asan_read_go(void *addr, uintptr_t sz, void *sp, void *pc);
MOVQ $__asan_read_go(SB), AX
JMP asancall<>(SB)
// func runtime·doasanwrite(addr unsafe.Pointer, sz, sp, pc uintptr)
TEXT runtime·doasanwrite(SB), NOSPLIT, $0-32
MOVQ addr+0(FP), RARG0
MOVQ sz+8(FP), RARG1
MOVQ sp+16(FP), RARG2
MOVQ pc+24(FP), RARG3
// void __asan_write_go(void *addr, uintptr_t sz, void *sp, void *pc);
MOVQ $__asan_write_go(SB), AX
JMP asancall<>(SB)
// func runtime·asanunpoison(addr unsafe.Pointer, sz uintptr)
TEXT runtime·asanunpoison(SB), NOSPLIT, $0-16
MOVQ addr+0(FP), RARG0
MOVQ sz+8(FP), RARG1
// void __asan_unpoison_go(void *addr, uintptr_t sz);
MOVQ $__asan_unpoison_go(SB), AX
JMP asancall<>(SB)
// func runtime·asanpoison(addr unsafe.Pointer, sz uintptr)
TEXT runtime·asanpoison(SB), NOSPLIT, $0-16
MOVQ addr+0(FP), RARG0
MOVQ sz+8(FP), RARG1
// void __asan_poison_go(void *addr, uintptr_t sz);
MOVQ $__asan_poison_go(SB), AX
JMP asancall<>(SB)
// func runtime·asanregisterglobals(addr unsafe.Pointer, n uintptr)
TEXT runtime·asanregisterglobals(SB), NOSPLIT, $0-16
MOVQ addr+0(FP), RARG0
MOVQ n+8(FP), RARG1
// void __asan_register_globals_go(void *addr, uintptr_t n);
MOVQ $__asan_register_globals_go(SB), AX
JMP asancall<>(SB)
// Switches SP to g0 stack and calls (AX). Arguments already set.
TEXT asancall<>(SB), NOSPLIT, $0-0
get_tls(R12)
MOVQ g(R12), R14
MOVQ SP, R12 // callee-saved, preserved across the CALL
CMPQ R14, $0
JE call // no g; still on a system stack
MOVQ g_m(R14), R13
// Switch to g0 stack.
MOVQ m_g0(R13), R10
CMPQ R10, R14
JE call // already on g0
MOVQ (g_sched+gobuf_sp)(R10), SP
call:
ANDQ $~15, SP // alignment for gcc ABI
CALL AX
MOVQ R12, SP
RET
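The asancall trampoline above (repeated per architecture in the files below) exists because the C sanitizer runtime expects a large, non-moving stack: unless we are already on the system stack, or have no g at all (a non-Go thread), it hops to g0's saved SP, aligns for the C ABI, calls out, and restores the caller's SP from a callee-saved register. A conceptual Go paraphrase, with hypothetical types and helpers rather than the runtime's own:

package main

type gobuf struct{ sp uintptr }
type g struct {
	m     *m
	sched gobuf
}
type m struct{ g0 *g }

func currentSP() uintptr { return 0 } // stand-in: read the stack pointer
func setSP(sp uintptr)   { _ = sp }   // stand-in: write the stack pointer
func callC(fn uintptr)   { _ = fn }   // stand-in: indirect call into C

func asancall(cur *g, fn uintptr) {
	saved := currentSP() // the assembly keeps this in a callee-saved register
	if cur != nil && cur != cur.m.g0 {
		setSP(cur.m.g0.sched.sp) // switch to the g0 stack
	}
	setSP(currentSP() &^ 15) // ANDQ $~15, SP: 16-byte alignment for the C ABI
	callC(fn)
	setSP(saved)
}

func main() { asancall(nil, 0) }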

76
src/runtime/asan_arm64.s Normal file
View File

@@ -0,0 +1,76 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build asan
#include "go_asm.h"
#include "textflag.h"
#define RARG0 R0
#define RARG1 R1
#define RARG2 R2
#define RARG3 R3
#define FARG R4
// Called from instrumented code.
// func runtime·doasanread(addr unsafe.Pointer, sz, sp, pc uintptr)
TEXT runtime·doasanread(SB), NOSPLIT, $0-32
MOVD addr+0(FP), RARG0
MOVD sz+8(FP), RARG1
MOVD sp+16(FP), RARG2
MOVD pc+24(FP), RARG3
// void __asan_read_go(void *addr, uintptr_t sz, void *sp, void *pc);
MOVD $__asan_read_go(SB), FARG
JMP asancall<>(SB)
// func runtime·doasanwrite(addr unsafe.Pointer, sz, sp, pc uintptr)
TEXT runtime·doasanwrite(SB), NOSPLIT, $0-32
MOVD addr+0(FP), RARG0
MOVD sz+8(FP), RARG1
MOVD sp+16(FP), RARG2
MOVD pc+24(FP), RARG3
// void __asan_write_go(void *addr, uintptr_t sz, void *sp, void *pc);
MOVD $__asan_write_go(SB), FARG
JMP asancall<>(SB)
// func runtime·asanunpoison(addr unsafe.Pointer, sz uintptr)
TEXT runtime·asanunpoison(SB), NOSPLIT, $0-16
MOVD addr+0(FP), RARG0
MOVD sz+8(FP), RARG1
// void __asan_unpoison_go(void *addr, uintptr_t sz);
MOVD $__asan_unpoison_go(SB), FARG
JMP asancall<>(SB)
// func runtime·asanpoison(addr unsafe.Pointer, sz uintptr)
TEXT runtime·asanpoison(SB), NOSPLIT, $0-16
MOVD addr+0(FP), RARG0
MOVD sz+8(FP), RARG1
// void __asan_poison_go(void *addr, uintptr_t sz);
MOVD $__asan_poison_go(SB), FARG
JMP asancall<>(SB)
// func runtime·asanregisterglobals(addr unsafe.Pointer, n uintptr)
TEXT runtime·asanregisterglobals(SB), NOSPLIT, $0-16
MOVD addr+0(FP), RARG0
MOVD n+8(FP), RARG1
// void __asan_register_globals_go(void *addr, uintptr_t n);
MOVD $__asan_register_globals_go(SB), FARG
JMP asancall<>(SB)
// Switches SP to g0 stack and calls (FARG). Arguments already set.
TEXT asancall<>(SB), NOSPLIT, $0-0
MOVD RSP, R19 // callee-saved
CBZ g, g0stack // no g, still on a system stack
MOVD g_m(g), R10
MOVD m_g0(R10), R11
CMP R11, g
BEQ g0stack
MOVD (g_sched+gobuf_sp)(R11), R5
MOVD R5, RSP
g0stack:
BL (FARG)
MOVD R19, RSP
RET

75
src/runtime/asan_loong64.s Normal file
View File

@@ -0,0 +1,75 @@
// Copyright 2023 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build asan
#include "go_asm.h"
#include "textflag.h"
#define RARG0 R4
#define RARG1 R5
#define RARG2 R6
#define RARG3 R7
#define FARG R8
// Called from instrumented code.
// func runtime·doasanread(addr unsafe.Pointer, sz, sp, pc uintptr)
TEXT runtime·doasanread(SB), NOSPLIT, $0-32
MOVV addr+0(FP), RARG0
MOVV sz+8(FP), RARG1
MOVV sp+16(FP), RARG2
MOVV pc+24(FP), RARG3
// void __asan_read_go(void *addr, uintptr_t sz, void *sp, void *pc);
MOVV $__asan_read_go(SB), FARG
JMP asancall<>(SB)
// func runtime·doasanwrite(addr unsafe.Pointer, sz, sp, pc uintptr)
TEXT runtime·doasanwrite(SB), NOSPLIT, $0-32
MOVV addr+0(FP), RARG0
MOVV sz+8(FP), RARG1
MOVV sp+16(FP), RARG2
MOVV pc+24(FP), RARG3
// void __asan_write_go(void *addr, uintptr_t sz, void *sp, void *pc);
MOVV $__asan_write_go(SB), FARG
JMP asancall<>(SB)
// func runtime·asanunpoison(addr unsafe.Pointer, sz uintptr)
TEXT runtime·asanunpoison(SB), NOSPLIT, $0-16
MOVV addr+0(FP), RARG0
MOVV sz+8(FP), RARG1
// void __asan_unpoison_go(void *addr, uintptr_t sz);
MOVV $__asan_unpoison_go(SB), FARG
JMP asancall<>(SB)
// func runtime·asanpoison(addr unsafe.Pointer, sz uintptr)
TEXT runtime·asanpoison(SB), NOSPLIT, $0-16
MOVV addr+0(FP), RARG0
MOVV sz+8(FP), RARG1
// void __asan_poison_go(void *addr, uintptr_t sz);
MOVV $__asan_poison_go(SB), FARG
JMP asancall<>(SB)
// func runtime·asanregisterglobals(addr unsafe.Pointer, n uintptr)
TEXT runtime·asanregisterglobals(SB), NOSPLIT, $0-16
MOVV addr+0(FP), RARG0
MOVV n+8(FP), RARG1
// void __asan_register_globals_go(void *addr, uintptr_t n);
MOVV $__asan_register_globals_go(SB), FARG
JMP asancall<>(SB)
// Switches SP to g0 stack and calls (FARG). Arguments already set.
TEXT asancall<>(SB), NOSPLIT, $0-0
MOVV R3, R23 // callee-saved
BEQ g, g0stack // no g, still on a system stack
MOVV g_m(g), R14
MOVV m_g0(R14), R15
BEQ R15, g, g0stack
MOVV (g_sched+gobuf_sp)(R15), R9
MOVV R9, R3
g0stack:
JAL (FARG)
MOVV R23, R3
RET

87
src/runtime/asan_ppc64le.s Normal file
View File

@@ -0,0 +1,87 @@
// Copyright 2022 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build asan
#include "go_asm.h"
#include "textflag.h"
#define RARG0 R3
#define RARG1 R4
#define RARG2 R5
#define RARG3 R6
#define FARG R12
// Called from instrumented code.
// func runtime·doasanread(addr unsafe.Pointer, sz, sp, pc uintptr)
TEXT runtime·doasanread(SB),NOSPLIT|NOFRAME,$0-32
MOVD addr+0(FP), RARG0
MOVD sz+8(FP), RARG1
MOVD sp+16(FP), RARG2
MOVD pc+24(FP), RARG3
// void __asan_read_go(void *addr, uintptr_t sz, void *sp, void *pc);
MOVD $__asan_read_go(SB), FARG
BR asancall<>(SB)
// func runtime·doasanwrite(addr unsafe.Pointer, sz, sp, pc uintptr)
TEXT runtime·doasanwrite(SB),NOSPLIT|NOFRAME,$0-32
MOVD addr+0(FP), RARG0
MOVD sz+8(FP), RARG1
MOVD sp+16(FP), RARG2
MOVD pc+24(FP), RARG3
// void __asan_write_go(void *addr, uintptr_t sz, void *sp, void *pc);
MOVD $__asan_write_go(SB), FARG
BR asancall<>(SB)
// func runtime·asanunpoison(addr unsafe.Pointer, sz uintptr)
TEXT runtime·asanunpoison(SB),NOSPLIT|NOFRAME,$0-16
MOVD addr+0(FP), RARG0
MOVD sz+8(FP), RARG1
// void __asan_unpoison_go(void *addr, uintptr_t sz);
MOVD $__asan_unpoison_go(SB), FARG
BR asancall<>(SB)
// func runtime·asanpoison(addr unsafe.Pointer, sz uintptr)
TEXT runtime·asanpoison(SB),NOSPLIT|NOFRAME,$0-16
MOVD addr+0(FP), RARG0
MOVD sz+8(FP), RARG1
// void __asan_poison_go(void *addr, uintptr_t sz);
MOVD $__asan_poison_go(SB), FARG
BR asancall<>(SB)
// func runtime·asanregisterglobals(addr unsafe.Pointer, n uintptr)
TEXT runtime·asanregisterglobals(SB),NOSPLIT|NOFRAME,$0-16
MOVD addr+0(FP), RARG0
MOVD n+8(FP), RARG1
// void __asan_register_globals_go(void *addr, uintptr_t n);
MOVD $__asan_register_globals_go(SB), FARG
BR asancall<>(SB)
// Switches SP to g0 stack and calls (FARG). Arguments already set.
TEXT asancall<>(SB), NOSPLIT, $0-0
// LR saved in generated prologue
// Get info from the current goroutine
MOVD runtime·tls_g(SB), R10 // g offset in TLS
MOVD 0(R10), g
MOVD g_m(g), R7 // m for g
MOVD R1, R16 // callee-saved, preserved across C call
MOVD m_g0(R7), R10 // g0 for m
CMP R10, g // same g0?
BEQ call // already on g0
MOVD (g_sched+gobuf_sp)(R10), R1 // switch R1
call:
// prepare frame for C ABI
SUB $32, R1 // create frame for callee saving LR, CR, R2 etc.
RLDCR $0, R1, $~15, R1 // align SP to 16 bytes
MOVD FARG, CTR // address of function to be called
MOVD R0, 0(R1) // clear back chain pointer
BL (CTR)
MOVD $0, R0 // C code can clobber R0, so set it back to 0
MOVD R16, R1 // restore R1
MOVD runtime·tls_g(SB), R10 // find correct g
MOVD 0(R10), g
RET
// tls_g, g value for each thread in TLS
GLOBL runtime·tls_g+0(SB), TLSBSS+DUPOK, $8

68
src/runtime/asan_riscv64.s Normal file
View File

@@ -0,0 +1,68 @@
// Copyright 2022 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build asan
#include "go_asm.h"
#include "textflag.h"
// Called from instrumented code.
// func runtime·doasanread(addr unsafe.Pointer, sz, sp, pc uintptr)
TEXT runtime·doasanread(SB), NOSPLIT, $0-32
MOV addr+0(FP), X10
MOV sz+8(FP), X11
MOV sp+16(FP), X12
MOV pc+24(FP), X13
// void __asan_read_go(void *addr, uintptr_t sz, void *sp, void *pc);
MOV $__asan_read_go(SB), X14
JMP asancall<>(SB)
// func runtime·doasanwrite(addr unsafe.Pointer, sz, sp, pc uintptr)
TEXT runtime·doasanwrite(SB), NOSPLIT, $0-32
MOV addr+0(FP), X10
MOV sz+8(FP), X11
MOV sp+16(FP), X12
MOV pc+24(FP), X13
// void __asan_write_go(void *addr, uintptr_t sz, void *sp, void *pc);
MOV $__asan_write_go(SB), X14
JMP asancall<>(SB)
// func runtime·asanunpoison(addr unsafe.Pointer, sz uintptr)
TEXT runtime·asanunpoison(SB), NOSPLIT, $0-16
MOV addr+0(FP), X10
MOV sz+8(FP), X11
// void __asan_unpoison_go(void *addr, uintptr_t sz);
MOV $__asan_unpoison_go(SB), X14
JMP asancall<>(SB)
// func runtime·asanpoison(addr unsafe.Pointer, sz uintptr)
TEXT runtime·asanpoison(SB), NOSPLIT, $0-16
MOV addr+0(FP), X10
MOV sz+8(FP), X11
// void __asan_poison_go(void *addr, uintptr_t sz);
MOV $__asan_poison_go(SB), X14
JMP asancall<>(SB)
// func runtime·asanregisterglobals(addr unsafe.Pointer, n uintptr)
TEXT runtime·asanregisterglobals(SB), NOSPLIT, $0-16
MOV addr+0(FP), X10
MOV n+8(FP), X11
// void __asan_register_globals_go(void *addr, uintptr_t n);
MOV $__asan_register_globals_go(SB), X14
JMP asancall<>(SB)
// Switches SP to g0 stack and calls (X14). Arguments already set.
TEXT asancall<>(SB), NOSPLIT, $0-0
MOV X2, X8 // callee-saved
BEQZ g, g0stack // no g, still on a system stack
MOV g_m(g), X21
MOV m_g0(X21), X21
BEQ X21, g, g0stack
MOV (g_sched+gobuf_sp)(X21), X2
g0stack:
JALR RA, X14
MOV X8, X2
RET

15
src/runtime/asm.s Normal file
View File

@@ -0,0 +1,15 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
#ifndef GOARCH_amd64
TEXT ·sigpanic0(SB),NOSPLIT,$0-0
JMP ·sigpanic<ABIInternal>(SB)
#endif
// See map.go comment on the need for this routine.
TEXT ·mapinitnoop<ABIInternal>(SB),NOSPLIT,$0-0
RET

1683
src/runtime/asm_386.s Normal file

File diff suppressed because it is too large

28
src/runtime/asm_amd64.h Normal file
View File

@@ -0,0 +1,28 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Define features that are guaranteed to be supported by setting the GOAMD64 variable.
// If a feature is supported, there's no need to check it at runtime every time.
#ifdef GOAMD64_v2
#define hasPOPCNT
#define hasSSE42
#endif
#ifdef GOAMD64_v3
#define hasAVX
#define hasAVX2
#define hasPOPCNT
#define hasSSE42
#endif
#ifdef GOAMD64_v4
#define hasAVX
#define hasAVX2
#define hasAVX512F
#define hasAVX512BW
#define hasAVX512VL
#define hasPOPCNT
#define hasSSE42
#endif
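These defines let assembly take a guaranteed-fast path with no CPU probe at all: building with, say, GOAMD64=v3 activates the v3 block above, and any #ifdef hasAVX2 code can assume the instruction set. Without such a guarantee, code must detect features at run time; an illustrative probe outside the runtime (assuming the golang.org/x/sys/cpu package; not part of this commit):

package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	// Under GOAMD64=v3 the check is unnecessary -- AVX2 is guaranteed --
	// but a portable binary has to ask the CPU.
	if cpu.X86.HasAVX2 {
		fmt.Println("AVX2 available: take the vectorized path")
	} else {
		fmt.Println("no AVX2: take the portable fallback")
	}
}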

2143
src/runtime/asm_amd64.s Normal file

File diff suppressed because it is too large

1155
src/runtime/asm_arm.s Normal file

File diff suppressed because it is too large

1599
src/runtime/asm_arm64.s Normal file

File diff suppressed because it is too large

957
src/runtime/asm_loong64.s Normal file
View File

@@ -0,0 +1,957 @@
// Copyright 2022 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "go_asm.h"
#include "go_tls.h"
#include "funcdata.h"
#include "textflag.h"
#define REGCTXT R29
TEXT runtime·rt0_go(SB),NOSPLIT|TOPFRAME,$0
// R3 = stack; R4 = argc; R5 = argv
ADDV $-24, R3
MOVW R4, 8(R3) // argc
MOVV R5, 16(R3) // argv
// create istack out of the given (operating system) stack.
// _cgo_init may update stackguard.
MOVV $runtime·g0(SB), g
MOVV $(-64*1024), R30
ADDV R30, R3, R19
MOVV R19, g_stackguard0(g)
MOVV R19, g_stackguard1(g)
MOVV R19, (g_stack+stack_lo)(g)
MOVV R3, (g_stack+stack_hi)(g)
// if there is a _cgo_init, call it using the gcc ABI.
MOVV _cgo_init(SB), R25
BEQ R25, nocgo
MOVV R0, R7 // arg 3: not used
MOVV R0, R6 // arg 2: not used
MOVV $setg_gcc<>(SB), R5 // arg 1: setg
MOVV g, R4 // arg 0: G
JAL (R25)
nocgo:
// update stackguard after _cgo_init
MOVV (g_stack+stack_lo)(g), R19
ADDV $const_stackGuard, R19
MOVV R19, g_stackguard0(g)
MOVV R19, g_stackguard1(g)
// set the per-goroutine and per-mach "registers"
MOVV $runtime·m0(SB), R19
// save m->g0 = g0
MOVV g, m_g0(R19)
// save m0 to g0->m
MOVV R19, g_m(g)
JAL runtime·check(SB)
// args are already prepared
JAL runtime·args(SB)
JAL runtime·osinit(SB)
JAL runtime·schedinit(SB)
// create a new goroutine to start program
MOVV $runtime·mainPC(SB), R19 // entry
ADDV $-16, R3
MOVV R19, 8(R3)
MOVV R0, 0(R3)
JAL runtime·newproc(SB)
ADDV $16, R3
// start this M
JAL runtime·mstart(SB)
MOVV R0, 1(R0)
RET
DATA runtime·mainPC+0(SB)/8,$runtime·main<ABIInternal>(SB)
GLOBL runtime·mainPC(SB),RODATA,$8
TEXT runtime·breakpoint(SB),NOSPLIT|NOFRAME,$0-0
BREAK
RET
TEXT runtime·asminit(SB),NOSPLIT|NOFRAME,$0-0
RET
TEXT runtime·mstart(SB),NOSPLIT|TOPFRAME,$0
JAL runtime·mstart0(SB)
RET // not reached
// func cputicks() int64
TEXT runtime·cputicks(SB),NOSPLIT,$0-8
RDTIMED R0, R4
MOVV R4, ret+0(FP)
RET
/*
* go-routine
*/
// void gogo(Gobuf*)
// restore state from Gobuf; longjmp
TEXT runtime·gogo(SB), NOSPLIT|NOFRAME, $0-8
MOVV buf+0(FP), R4
MOVV gobuf_g(R4), R5
MOVV 0(R5), R0 // make sure g != nil
JMP gogo<>(SB)
TEXT gogo<>(SB), NOSPLIT|NOFRAME, $0
MOVV R5, g
JAL runtime·save_g(SB)
MOVV gobuf_sp(R4), R3
MOVV gobuf_lr(R4), R1
MOVV gobuf_ret(R4), R19
MOVV gobuf_ctxt(R4), REGCTXT
MOVV R0, gobuf_sp(R4)
MOVV R0, gobuf_ret(R4)
MOVV R0, gobuf_lr(R4)
MOVV R0, gobuf_ctxt(R4)
MOVV gobuf_pc(R4), R6
JMP (R6)
// void mcall(fn func(*g))
// Switch to m->g0's stack, call fn(g).
// Fn must never return. It should gogo(&g->sched)
// to keep running g.
TEXT runtime·mcall<ABIInternal>(SB), NOSPLIT|NOFRAME, $0-8
MOVV R4, REGCTXT
// Save caller state in g->sched
MOVV R3, (g_sched+gobuf_sp)(g)
MOVV R1, (g_sched+gobuf_pc)(g)
MOVV R0, (g_sched+gobuf_lr)(g)
// Switch to m->g0 & its stack, call fn.
MOVV g, R4 // arg = g
MOVV g_m(g), R20
MOVV m_g0(R20), g
JAL runtime·save_g(SB)
BNE g, R4, 2(PC)
JMP runtime·badmcall(SB)
MOVV 0(REGCTXT), R20 // code pointer
MOVV (g_sched+gobuf_sp)(g), R3 // sp = m->g0->sched.sp
ADDV $-16, R3
MOVV R4, 8(R3)
MOVV R0, 0(R3)
JAL (R20)
JMP runtime·badmcall2(SB)
// systemstack_switch is a dummy routine that systemstack leaves at the bottom
// of the G stack. We need to distinguish the routine that
// lives at the bottom of the G stack from the one that lives
// at the top of the system stack because the one at the top of
// the system stack terminates the stack walk (see topofstack()).
TEXT runtime·systemstack_switch(SB), NOSPLIT, $0-0
UNDEF
JAL (R1) // make sure this function is not leaf
RET
// func systemstack(fn func())
TEXT runtime·systemstack(SB), NOSPLIT, $0-8
MOVV fn+0(FP), R19 // R19 = fn
MOVV R19, REGCTXT // context
MOVV g_m(g), R4 // R4 = m
MOVV m_gsignal(R4), R5 // R5 = gsignal
BEQ g, R5, noswitch
MOVV m_g0(R4), R5 // R5 = g0
BEQ g, R5, noswitch
MOVV m_curg(R4), R6
BEQ g, R6, switch
// Bad: g is not gsignal, not g0, not curg. What is it?
// Hide call from linker nosplit analysis.
MOVV $runtime·badsystemstack(SB), R7
JAL (R7)
JAL runtime·abort(SB)
switch:
// save our state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
JAL gosave_systemstack_switch<>(SB)
// switch to g0
MOVV R5, g
JAL runtime·save_g(SB)
MOVV (g_sched+gobuf_sp)(g), R19
MOVV R19, R3
// call target function
MOVV 0(REGCTXT), R6 // code pointer
JAL (R6)
// switch back to g
MOVV g_m(g), R4
MOVV m_curg(R4), g
JAL runtime·save_g(SB)
MOVV (g_sched+gobuf_sp)(g), R3
MOVV R0, (g_sched+gobuf_sp)(g)
RET
noswitch:
// already on m stack, just call directly
// Using a tail call here cleans up tracebacks since we won't stop
// at an intermediate systemstack.
MOVV 0(REGCTXT), R4 // code pointer
MOVV 0(R3), R1 // restore LR
ADDV $8, R3
JMP (R4)
// func switchToCrashStack0(fn func())
TEXT runtime·switchToCrashStack0(SB), NOSPLIT, $0-8
MOVV fn+0(FP), REGCTXT // context register
MOVV g_m(g), R4 // curm
// set g to gcrash
MOVV $runtime·gcrash(SB), g // g = &gcrash
JAL runtime·save_g(SB)
MOVV R4, g_m(g) // g.m = curm
MOVV g, m_g0(R4) // curm.g0 = g
// switch to crashstack
MOVV (g_stack+stack_hi)(g), R4
ADDV $(-4*8), R4, R3
// call target function
MOVV 0(REGCTXT), R6
JAL (R6)
// should never return
JAL runtime·abort(SB)
UNDEF
/*
* support for morestack
*/
// Called during function prolog when more stack is needed.
// Caller has already loaded:
// loong64: R31: LR
//
// The traceback routines see morestack on a g0 as being
// the top of a stack (for example, morestack calling newstack
// calling the scheduler calling newm calling gc), so we must
// record an argument size. For that purpose, it has no arguments.
TEXT runtime·morestack(SB),NOSPLIT|NOFRAME,$0-0
// Called from f.
// Set g->sched to context in f.
MOVV R3, (g_sched+gobuf_sp)(g)
MOVV R1, (g_sched+gobuf_pc)(g)
MOVV R31, (g_sched+gobuf_lr)(g)
MOVV REGCTXT, (g_sched+gobuf_ctxt)(g)
// Cannot grow scheduler stack (m->g0).
MOVV g_m(g), R7
MOVV m_g0(R7), R8
BNE g, R8, 3(PC)
JAL runtime·badmorestackg0(SB)
JAL runtime·abort(SB)
// Cannot grow signal stack (m->gsignal).
MOVV m_gsignal(R7), R8
BNE g, R8, 3(PC)
JAL runtime·badmorestackgsignal(SB)
JAL runtime·abort(SB)
// Called from f.
// Set m->morebuf to f's caller.
MOVV R31, (m_morebuf+gobuf_pc)(R7) // f's caller's PC
MOVV R3, (m_morebuf+gobuf_sp)(R7) // f's caller's SP
MOVV g, (m_morebuf+gobuf_g)(R7)
// Call newstack on m->g0's stack.
MOVV m_g0(R7), g
JAL runtime·save_g(SB)
MOVV (g_sched+gobuf_sp)(g), R3
// Create a stack frame on g0 to call newstack.
MOVV R0, -8(R3) // Zero saved LR in frame
ADDV $-8, R3
JAL runtime·newstack(SB)
// Not reached, but make sure the return PC from the call to newstack
// is still in this function, and not the beginning of the next.
UNDEF
TEXT runtime·morestack_noctxt(SB),NOSPLIT|NOFRAME,$0-0
// Force SPWRITE. This function doesn't actually write SP,
// but it is called with a special calling convention where
// the caller doesn't save LR on stack but passes it as a
// register (R5), and the unwinder currently doesn't understand this.
// Make it SPWRITE to stop unwinding. (See issue 54332)
MOVV R3, R3
MOVV R0, REGCTXT
JMP runtime·morestack(SB)
// reflectcall: call a function with the given argument list
// func call(stackArgsType *_type, f *FuncVal, stackArgs *byte, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs).
// we don't have variable-sized frames, so we use a small number
// of constant-sized-frame functions to encode a few bits of size in the pc.
// Caution: ugly multiline assembly macros in your future!
#define DISPATCH(NAME,MAXSIZE) \
MOVV $MAXSIZE, R30; \
SGTU R19, R30, R30; \
BNE R30, 3(PC); \
MOVV $NAME(SB), R4; \
JMP (R4)
// Note: can't just "BR NAME(SB)" - bad inlining results.
TEXT ·reflectcall(SB), NOSPLIT|NOFRAME, $0-48
MOVWU frameSize+32(FP), R19
DISPATCH(runtime·call32, 32)
DISPATCH(runtime·call64, 64)
DISPATCH(runtime·call128, 128)
DISPATCH(runtime·call256, 256)
DISPATCH(runtime·call512, 512)
DISPATCH(runtime·call1024, 1024)
DISPATCH(runtime·call2048, 2048)
DISPATCH(runtime·call4096, 4096)
DISPATCH(runtime·call8192, 8192)
DISPATCH(runtime·call16384, 16384)
DISPATCH(runtime·call32768, 32768)
DISPATCH(runtime·call65536, 65536)
DISPATCH(runtime·call131072, 131072)
DISPATCH(runtime·call262144, 262144)
DISPATCH(runtime·call524288, 524288)
DISPATCH(runtime·call1048576, 1048576)
DISPATCH(runtime·call2097152, 2097152)
DISPATCH(runtime·call4194304, 4194304)
DISPATCH(runtime·call8388608, 8388608)
DISPATCH(runtime·call16777216, 16777216)
DISPATCH(runtime·call33554432, 33554432)
DISPATCH(runtime·call67108864, 67108864)
DISPATCH(runtime·call134217728, 134217728)
DISPATCH(runtime·call268435456, 268435456)
DISPATCH(runtime·call536870912, 536870912)
DISPATCH(runtime·call1073741824, 1073741824)
MOVV $runtime·badreflectcall(SB), R4
JMP (R4)
#define CALLFN(NAME,MAXSIZE) \
TEXT NAME(SB), WRAPPER, $MAXSIZE-48; \
NO_LOCAL_POINTERS; \
/* copy arguments to stack */ \
MOVV arg+16(FP), R4; \
MOVWU argsize+24(FP), R5; \
MOVV R3, R12; \
ADDV $8, R12; \
ADDV R12, R5; \
BEQ R12, R5, 6(PC); \
MOVBU (R4), R6; \
ADDV $1, R4; \
MOVBU R6, (R12); \
ADDV $1, R12; \
JMP -5(PC); \
/* set up argument registers */ \
MOVV regArgs+40(FP), R25; \
JAL ·unspillArgs(SB); \
/* call function */ \
MOVV f+8(FP), REGCTXT; \
MOVV (REGCTXT), R25; \
PCDATA $PCDATA_StackMapIndex, $0; \
JAL (R25); \
/* copy return values back */ \
MOVV regArgs+40(FP), R25; \
JAL ·spillArgs(SB); \
MOVV argtype+0(FP), R7; \
MOVV arg+16(FP), R4; \
MOVWU n+24(FP), R5; \
MOVWU retoffset+28(FP), R6; \
ADDV $8, R3, R12; \
ADDV R6, R12; \
ADDV R6, R4; \
SUBVU R6, R5; \
JAL callRet<>(SB); \
RET
// callRet copies return values back at the end of call*. This is a
// separate function so it can allocate stack space for the arguments
// to reflectcallmove. It does not follow the Go ABI; it expects its
// arguments in registers.
TEXT callRet<>(SB), NOSPLIT, $40-0
NO_LOCAL_POINTERS
MOVV R7, 8(R3)
MOVV R4, 16(R3)
MOVV R12, 24(R3)
MOVV R5, 32(R3)
MOVV R25, 40(R3)
JAL runtime·reflectcallmove(SB)
RET
CALLFN(·call16, 16)
CALLFN(·call32, 32)
CALLFN(·call64, 64)
CALLFN(·call128, 128)
CALLFN(·call256, 256)
CALLFN(·call512, 512)
CALLFN(·call1024, 1024)
CALLFN(·call2048, 2048)
CALLFN(·call4096, 4096)
CALLFN(·call8192, 8192)
CALLFN(·call16384, 16384)
CALLFN(·call32768, 32768)
CALLFN(·call65536, 65536)
CALLFN(·call131072, 131072)
CALLFN(·call262144, 262144)
CALLFN(·call524288, 524288)
CALLFN(·call1048576, 1048576)
CALLFN(·call2097152, 2097152)
CALLFN(·call4194304, 4194304)
CALLFN(·call8388608, 8388608)
CALLFN(·call16777216, 16777216)
CALLFN(·call33554432, 33554432)
CALLFN(·call67108864, 67108864)
CALLFN(·call134217728, 134217728)
CALLFN(·call268435456, 268435456)
CALLFN(·call536870912, 536870912)
CALLFN(·call1073741824, 1073741824)
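The DISPATCH/CALLFN pair above encodes reflectcall's frame size in the program counter: the requested argument frame is rounded up to the next size class and control jumps to the matching fixed-frame callNNN routine. A hedged Go sketch of that bucketing (illustrative only, not runtime code):

package main

import "fmt"

// frameClass mirrors the DISPATCH ladder: the smallest class whose
// fixed frame fits the requested argument size, or 0 for
// badreflectcall when nothing fits.
func frameClass(argSize uint32) uint32 {
	for class := uint32(16); class <= 1<<30; class <<= 1 {
		if argSize <= class {
			return class
		}
	}
	return 0
}

func main() {
	for _, n := range []uint32{1, 16, 17, 5000} {
		fmt.Printf("args %5d -> call%d\n", n, frameClass(n))
	}
}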
TEXT runtime·procyield(SB),NOSPLIT,$0-0
RET
// Save state of caller into g->sched,
// but using fake PC from systemstack_switch.
// Must only be called from functions with no locals ($0)
// or else unwinding from systemstack_switch is incorrect.
// Smashes R19.
TEXT gosave_systemstack_switch<>(SB),NOSPLIT|NOFRAME,$0
MOVV $runtime·systemstack_switch(SB), R19
ADDV $8, R19
MOVV R19, (g_sched+gobuf_pc)(g)
MOVV R3, (g_sched+gobuf_sp)(g)
MOVV R0, (g_sched+gobuf_lr)(g)
MOVV R0, (g_sched+gobuf_ret)(g)
// Assert ctxt is zero. See func save.
MOVV (g_sched+gobuf_ctxt)(g), R19
BEQ R19, 2(PC)
JAL runtime·abort(SB)
RET
// func asmcgocall(fn, arg unsafe.Pointer) int32
// Call fn(arg) on the scheduler stack,
// aligned appropriately for the gcc ABI.
// See cgocall.go for more details.
TEXT ·asmcgocall(SB),NOSPLIT,$0-20
MOVV fn+0(FP), R25
MOVV arg+8(FP), R4
MOVV R3, R12 // save original stack pointer
MOVV g, R13
// Figure out if we need to switch to m->g0 stack.
// We get called to create new OS threads too, and those
// come in on the m->g0 stack already.
MOVV g_m(g), R5
MOVV m_gsignal(R5), R6
BEQ R6, g, g0
MOVV m_g0(R5), R6
BEQ R6, g, g0
JAL gosave_systemstack_switch<>(SB)
MOVV R6, g
JAL runtime·save_g(SB)
MOVV (g_sched+gobuf_sp)(g), R3
// Now on a scheduling stack (a pthread-created stack).
g0:
// Save room for two of our pointers.
ADDV $-16, R3
MOVV R13, 0(R3) // save old g on stack
MOVV (g_stack+stack_hi)(R13), R13
SUBVU R12, R13
MOVV R13, 8(R3) // save depth in old g stack (can't just save SP, as stack might be copied during a callback)
JAL (R25)
// Restore g, stack pointer. R4 is return value.
MOVV 0(R3), g
JAL runtime·save_g(SB)
MOVV (g_stack+stack_hi)(g), R5
MOVV 8(R3), R6
SUBVU R6, R5
MOVV R5, R3
MOVW R4, ret+16(FP)
RET
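Note the trick at the g0 label above: asmcgocall saves the caller's position as a depth below stack.hi rather than as a raw SP, because a cgo callback can grow and therefore copy the goroutine stack while we are in C; on return, the same depth is applied to the stack's (possibly new) upper bound. A tiny illustration of the arithmetic with made-up addresses:

package main

import "fmt"

func main() {
	// Before the C call: remember how deep we are, not where we are.
	oldHi, sp := uintptr(0x7000), uintptr(0x6f40)
	depth := oldHi - sp // what asmcgocall stores on the g0 stack

	// A callback grew the stack, so it was copied to a new segment.
	newHi := uintptr(0x9000)

	// After the C call: re-derive SP against the stack's new home.
	fmt.Printf("restored SP: %#x\n", newHi-depth) // 0x8f40
}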
// func cgocallback(fn, frame unsafe.Pointer, ctxt uintptr)
// See cgocall.go for more details.
TEXT ·cgocallback(SB),NOSPLIT,$24-24
NO_LOCAL_POINTERS
// Skip cgocallbackg, just dropm when fn is nil, and frame is the saved g.
// This path is used to dropm while the thread is exiting.
MOVV fn+0(FP), R5
BNE R5, loadg
// Restore the g from frame.
MOVV frame+8(FP), g
JMP dropm
loadg:
// Load m and g from thread-local storage.
MOVB runtime·iscgo(SB), R19
BEQ R19, nocgo
JAL runtime·load_g(SB)
nocgo:
// If g is nil, Go did not create the current thread,
// or this thread has never called into Go on pthread platforms.
// Call needm to obtain one for temporary use.
// In this case, we're running on the thread stack, so there's
// lots of space, but the linker doesn't know. Hide the call from
// the linker analysis by using an indirect call.
BEQ g, needm
MOVV g_m(g), R12
MOVV R12, savedm-8(SP)
JMP havem
needm:
MOVV g, savedm-8(SP) // g is zero, so is m.
MOVV $runtime·needAndBindM(SB), R4
JAL (R4)
// Set m->sched.sp = SP, so that if a panic happens
// during the function we are about to execute, it will
// have a valid SP to run on the g0 stack.
// The next few lines (after the havem label)
// will save this SP onto the stack and then write
// the same SP back to m->sched.sp. That seems redundant,
// but if an unrecovered panic happens, unwindm will
// restore the g->sched.sp from the stack location
// and then systemstack will try to use it. If we don't set it here,
// that restored SP will be uninitialized (typically 0) and
// will not be usable.
MOVV g_m(g), R12
MOVV m_g0(R12), R19
MOVV R3, (g_sched+gobuf_sp)(R19)
havem:
// Now there's a valid m, and we're running on its m->g0.
// Save current m->g0->sched.sp on stack and then set it to SP.
// Save current sp in m->g0->sched.sp in preparation for
// switch back to m->curg stack.
// NOTE: unwindm knows that the saved g->sched.sp is at 8(R3) aka savedsp-24(SP).
MOVV m_g0(R12), R19
MOVV (g_sched+gobuf_sp)(R19), R13
MOVV R13, savedsp-24(SP) // must match frame size
MOVV R3, (g_sched+gobuf_sp)(R19)
// Switch to m->curg stack and call runtime.cgocallbackg.
// Because we are taking over the execution of m->curg
// but *not* resuming what had been running, we need to
// save that information (m->curg->sched) so we can restore it.
// We can restore m->curg->sched.sp easily, because calling
// runtime.cgocallbackg leaves SP unchanged upon return.
// To save m->curg->sched.pc, we push it onto the curg stack and
// open a frame the same size as cgocallback's g0 frame.
// Once we switch to the curg stack, the pushed PC will appear
// to be the return PC of cgocallback, so that the traceback
// will seamlessly trace back into the earlier calls.
MOVV m_curg(R12), g
JAL runtime·save_g(SB)
MOVV (g_sched+gobuf_sp)(g), R13 // prepare stack as R13
MOVV (g_sched+gobuf_pc)(g), R4
MOVV R4, -(24+8)(R13) // "saved LR"; must match frame size
MOVV fn+0(FP), R5
MOVV frame+8(FP), R6
MOVV ctxt+16(FP), R7
MOVV $-(24+8)(R13), R3
MOVV R5, 8(R3)
MOVV R6, 16(R3)
MOVV R7, 24(R3)
JAL runtime·cgocallbackg(SB)
// Restore g->sched (== m->curg->sched) from saved values.
MOVV 0(R3), R4
MOVV R4, (g_sched+gobuf_pc)(g)
MOVV $(24+8)(R3), R13 // must match frame size
MOVV R13, (g_sched+gobuf_sp)(g)
// Switch back to m->g0's stack and restore m->g0->sched.sp.
// (Unlike m->curg, the g0 goroutine never uses sched.pc,
// so we do not have to restore it.)
MOVV g_m(g), R12
MOVV m_g0(R12), g
JAL runtime·save_g(SB)
MOVV (g_sched+gobuf_sp)(g), R3
MOVV savedsp-24(SP), R13 // must match frame size
MOVV R13, (g_sched+gobuf_sp)(g)
// If the m on entry was nil, we called needm above to borrow an m,
// 1. for the duration of the call on non-pthread platforms,
// 2. or for the lifetime of the C thread on pthread platforms.
// If the m on entry wasn't nil,
// 1. the thread might be a Go thread,
// 2. or it wasn't the first call from a C thread on pthread platforms,
// since then we skip dropm to reuse the m in the first call.
MOVV savedm-8(SP), R12
BNE R12, droppedm
// Skip dropm to reuse it in the next call, when a pthread key has been created.
MOVV _cgo_pthread_key_created(SB), R12
// If _cgo_pthread_key_created is a nil pointer, cgo is disabled and we need dropm.
BEQ R12, dropm
MOVV (R12), R12
BNE R12, droppedm
dropm:
MOVV $runtime·dropm(SB), R4
JAL (R4)
droppedm:
// Done!
RET
// void setg(G*); set g. for use by needm.
TEXT runtime·setg(SB), NOSPLIT, $0-8
MOVV gg+0(FP), g
// This only happens if iscgo, so jump straight to save_g
JAL runtime·save_g(SB)
RET
// void setg_gcc(G*); set g called from gcc with g in R19
TEXT setg_gcc<>(SB),NOSPLIT,$0-0
MOVV R19, g
JAL runtime·save_g(SB)
RET
TEXT runtime·abort(SB),NOSPLIT|NOFRAME,$0-0
MOVW (R0), R0
UNDEF
// AES hashing not implemented for loong64
TEXT runtime·memhash<ABIInternal>(SB),NOSPLIT|NOFRAME,$0-32
JMP runtime·memhashFallback<ABIInternal>(SB)
TEXT runtime·strhash<ABIInternal>(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·strhashFallback<ABIInternal>(SB)
TEXT runtime·memhash32<ABIInternal>(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·memhash32Fallback<ABIInternal>(SB)
TEXT runtime·memhash64<ABIInternal>(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·memhash64Fallback<ABIInternal>(SB)
TEXT runtime·return0(SB), NOSPLIT, $0
MOVW $0, R19
RET
// Called from cgo wrappers, this function returns g->m->curg.stack.hi.
// Must obey the gcc calling convention.
TEXT _cgo_topofstack(SB),NOSPLIT,$16
// g (R22) and REGTMP (R30) might be clobbered by load_g. They
// are callee-save in the gcc calling convention, so save them.
MOVV R30, savedREGTMP-16(SP)
MOVV g, savedG-8(SP)
JAL runtime·load_g(SB)
MOVV g_m(g), R19
MOVV m_curg(R19), R19
MOVV (g_stack+stack_hi)(R19), R4 // return value in R4
MOVV savedG-8(SP), g
MOVV savedREGTMP-16(SP), R30
RET
// The top-most function running on a goroutine
// returns to goexit+PCQuantum.
TEXT runtime·goexit(SB),NOSPLIT|NOFRAME|TOPFRAME,$0-0
NOOP
JAL runtime·goexit1(SB) // does not return
// traceback from goexit1 must hit code range of goexit
NOOP
// This is called from .init_array and follows the platform, not Go, ABI.
TEXT runtime·addmoduledata(SB),NOSPLIT,$0-0
ADDV $-0x10, R3
MOVV R30, 8(R3) // The access to global variables below implicitly uses R30, which is callee-save
MOVV runtime·lastmoduledatap(SB), R12
MOVV R4, moduledata_next(R12)
MOVV R4, runtime·lastmoduledatap(SB)
MOVV 8(R3), R30
ADDV $0x10, R3
RET
TEXT ·checkASM(SB),NOSPLIT,$0-1
MOVW $1, R19
MOVB R19, ret+0(FP)
RET
// spillArgs stores return values from registers to a *internal/abi.RegArgs in R25.
TEXT ·spillArgs(SB),NOSPLIT,$0-0
MOVV R4, (0*8)(R25)
MOVV R5, (1*8)(R25)
MOVV R6, (2*8)(R25)
MOVV R7, (3*8)(R25)
MOVV R8, (4*8)(R25)
MOVV R9, (5*8)(R25)
MOVV R10, (6*8)(R25)
MOVV R11, (7*8)(R25)
MOVV R12, (8*8)(R25)
MOVV R13, (9*8)(R25)
MOVV R14, (10*8)(R25)
MOVV R15, (11*8)(R25)
MOVV R16, (12*8)(R25)
MOVV R17, (13*8)(R25)
MOVV R18, (14*8)(R25)
MOVV R19, (15*8)(R25)
MOVD F0, (16*8)(R25)
MOVD F1, (17*8)(R25)
MOVD F2, (18*8)(R25)
MOVD F3, (19*8)(R25)
MOVD F4, (20*8)(R25)
MOVD F5, (21*8)(R25)
MOVD F6, (22*8)(R25)
MOVD F7, (23*8)(R25)
MOVD F8, (24*8)(R25)
MOVD F9, (25*8)(R25)
MOVD F10, (26*8)(R25)
MOVD F11, (27*8)(R25)
MOVD F12, (28*8)(R25)
MOVD F13, (29*8)(R25)
MOVD F14, (30*8)(R25)
MOVD F15, (31*8)(R25)
RET
// unspillArgs loads args into registers from a *internal/abi.RegArgs in R25.
TEXT ·unspillArgs(SB),NOSPLIT,$0-0
MOVV (0*8)(R25), R4
MOVV (1*8)(R25), R5
MOVV (2*8)(R25), R6
MOVV (3*8)(R25), R7
MOVV (4*8)(R25), R8
MOVV (5*8)(R25), R9
MOVV (6*8)(R25), R10
MOVV (7*8)(R25), R11
MOVV (8*8)(R25), R12
MOVV (9*8)(R25), R13
MOVV (10*8)(R25), R14
MOVV (11*8)(R25), R15
MOVV (12*8)(R25), R16
MOVV (13*8)(R25), R17
MOVV (14*8)(R25), R18
MOVV (15*8)(R25), R19
MOVD (16*8)(R25), F0
MOVD (17*8)(R25), F1
MOVD (18*8)(R25), F2
MOVD (19*8)(R25), F3
MOVD (20*8)(R25), F4
MOVD (21*8)(R25), F5
MOVD (22*8)(R25), F6
MOVD (23*8)(R25), F7
MOVD (24*8)(R25), F8
MOVD (25*8)(R25), F9
MOVD (26*8)(R25), F10
MOVD (27*8)(R25), F11
MOVD (28*8)(R25), F12
MOVD (29*8)(R25), F13
MOVD (30*8)(R25), F14
MOVD (31*8)(R25), F15
RET
// gcWriteBarrier informs the GC about heap pointer writes.
//
// gcWriteBarrier does NOT follow the Go ABI. It accepts the
// number of bytes of buffer needed in R29, and returns a pointer
// to the buffer space in R29.
// It clobbers R30 (the linker temp register).
// The act of CALLing gcWriteBarrier will clobber R1 (LR).
// It does not clobber any other general-purpose registers,
// but may clobber others (e.g., floating point registers).
TEXT gcWriteBarrier<>(SB),NOSPLIT,$216
// Save the registers clobbered by the fast path.
MOVV R19, 208(R3)
MOVV R13, 216(R3)
retry:
MOVV g_m(g), R19
MOVV m_p(R19), R19
MOVV (p_wbBuf+wbBuf_next)(R19), R13
MOVV (p_wbBuf+wbBuf_end)(R19), R30 // R30 is linker temp register
// Increment wbBuf.next position.
ADDV R29, R13
// Is the buffer full?
BLTU R30, R13, flush
// Commit to the larger buffer.
MOVV R13, (p_wbBuf+wbBuf_next)(R19)
// Make return value (the original next position)
SUBV R29, R13, R29
// Restore registers.
MOVV 208(R3), R19
MOVV 216(R3), R13
RET
flush:
// Save all general purpose registers since these could be
// clobbered by wbBufFlush and were not saved by the caller.
MOVV R27, 8(R3)
MOVV R28, 16(R3)
// R1 is LR, which was saved by the prologue.
MOVV R2, 24(R3)
// R3 is SP.
MOVV R4, 32(R3)
MOVV R5, 40(R3)
MOVV R6, 48(R3)
MOVV R7, 56(R3)
MOVV R8, 64(R3)
MOVV R9, 72(R3)
MOVV R10, 80(R3)
MOVV R11, 88(R3)
MOVV R12, 96(R3)
// R13 already saved
MOVV R14, 104(R3)
MOVV R15, 112(R3)
MOVV R16, 120(R3)
MOVV R17, 128(R3)
MOVV R18, 136(R3)
// R19 already saved
MOVV R20, 144(R3)
MOVV R21, 152(R3)
// R22 is g.
MOVV R23, 160(R3)
MOVV R24, 168(R3)
MOVV R25, 176(R3)
MOVV R26, 184(R3)
// R27 already saved
// R28 already saved.
MOVV R29, 192(R3)
// R30 is tmp register.
MOVV R31, 200(R3)
CALL runtime·wbBufFlush(SB)
MOVV 8(R3), R27
MOVV 16(R3), R28
MOVV 24(R3), R2
MOVV 32(R3), R4
MOVV 40(R3), R5
MOVV 48(R3), R6
MOVV 56(R3), R7
MOVV 64(R3), R8
MOVV 72(R3), R9
MOVV 80(R3), R10
MOVV 88(R3), R11
MOVV 96(R3), R12
MOVV 104(R3), R14
MOVV 112(R3), R15
MOVV 120(R3), R16
MOVV 128(R3), R17
MOVV 136(R3), R18
MOVV 144(R3), R20
MOVV 152(R3), R21
MOVV 160(R3), R23
MOVV 168(R3), R24
MOVV 176(R3), R25
MOVV 184(R3), R26
MOVV 192(R3), R29
MOVV 200(R3), R31
JMP retry
TEXT runtime·gcWriteBarrier1<ABIInternal>(SB),NOSPLIT,$0
MOVV $8, R29
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier2<ABIInternal>(SB),NOSPLIT,$0
MOVV $16, R29
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier3<ABIInternal>(SB),NOSPLIT,$0
MOVV $24, R29
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier4<ABIInternal>(SB),NOSPLIT,$0
MOVV $32, R29
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier5<ABIInternal>(SB),NOSPLIT,$0
MOVV $40, R29
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier6<ABIInternal>(SB),NOSPLIT,$0
MOVV $48, R29
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier7<ABIInternal>(SB),NOSPLIT,$0
MOVV $56, R29
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier8<ABIInternal>(SB),NOSPLIT,$0
MOVV $64, R29
JMP gcWriteBarrier<>(SB)
// Note: these functions use a special calling convention to save generated code space.
// Arguments are passed in registers, but the space for those arguments are allocated
// in the caller's stack frame. These stubs write the args into that stack space and
// then tail call to the corresponding runtime handler.
// The tail call makes these stubs disappear in backtraces.
TEXT runtime·panicIndex<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R20, R4
MOVV R21, R5
JMP runtime·goPanicIndex<ABIInternal>(SB)
TEXT runtime·panicIndexU<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R20, R4
MOVV R21, R5
JMP runtime·goPanicIndexU<ABIInternal>(SB)
TEXT runtime·panicSliceAlen<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R21, R4
MOVV R23, R5
JMP runtime·goPanicSliceAlen<ABIInternal>(SB)
TEXT runtime·panicSliceAlenU<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R21, R4
MOVV R23, R5
JMP runtime·goPanicSliceAlenU<ABIInternal>(SB)
TEXT runtime·panicSliceAcap<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R21, R4
MOVV R23, R5
JMP runtime·goPanicSliceAcap<ABIInternal>(SB)
TEXT runtime·panicSliceAcapU<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R21, R4
MOVV R23, R5
JMP runtime·goPanicSliceAcapU<ABIInternal>(SB)
TEXT runtime·panicSliceB<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R20, R4
MOVV R21, R5
JMP runtime·goPanicSliceB<ABIInternal>(SB)
TEXT runtime·panicSliceBU<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R20, R4
MOVV R21, R5
JMP runtime·goPanicSliceBU<ABIInternal>(SB)
TEXT runtime·panicSlice3Alen<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R23, R4
MOVV R24, R5
JMP runtime·goPanicSlice3Alen<ABIInternal>(SB)
TEXT runtime·panicSlice3AlenU<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R23, R4
MOVV R24, R5
JMP runtime·goPanicSlice3AlenU<ABIInternal>(SB)
TEXT runtime·panicSlice3Acap<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R23, R4
MOVV R24, R5
JMP runtime·goPanicSlice3Acap<ABIInternal>(SB)
TEXT runtime·panicSlice3AcapU<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R23, R4
MOVV R24, R5
JMP runtime·goPanicSlice3AcapU<ABIInternal>(SB)
TEXT runtime·panicSlice3B<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R21, R4
MOVV R23, R5
JMP runtime·goPanicSlice3B<ABIInternal>(SB)
TEXT runtime·panicSlice3BU<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R21, R4
MOVV R23, R5
JMP runtime·goPanicSlice3BU<ABIInternal>(SB)
TEXT runtime·panicSlice3C<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R20, R4
MOVV R21, R5
JMP runtime·goPanicSlice3C<ABIInternal>(SB)
TEXT runtime·panicSlice3CU<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R20, R4
MOVV R21, R5
JMP runtime·goPanicSlice3CU<ABIInternal>(SB)
TEXT runtime·panicSliceConvert<ABIInternal>(SB),NOSPLIT,$0-16
MOVV R23, R4
MOVV R24, R5
JMP runtime·goPanicSliceConvert<ABIInternal>(SB)
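gcWriteBarrier above (and its mips64 counterpart below) is a bump allocator over the per-P write-barrier buffer: advance wbBuf.next by the requested byte count and, if that crosses wbBuf.end, spill everything to the GC via wbBufFlush and retry. A self-contained sketch of that fast-path/flush structure, using hypothetical Go types rather than the runtime's own:

package main

type wbBuf struct {
	next, end uintptr // fill position and one-past-the-end, as indices here
	buf       [512]uintptr
	flushes   int
}

func (b *wbBuf) reset() {
	b.next = 0
	b.end = uintptr(len(b.buf))
}

// reserve claims n slots and returns the index of the first, flushing
// and retrying on overflow -- the same shape as the assembly's
// retry/flush labels.
func (b *wbBuf) reserve(n uintptr) uintptr {
retry:
	next := b.next + n
	if next > b.end {
		b.flush()
		goto retry
	}
	b.next = next
	return next - n
}

// flush stands in for runtime.wbBufFlush, which hands the buffered
// pointers to the garbage collector before the buffer is reused.
func (b *wbBuf) flush() {
	b.flushes++
	b.reset()
}

func main() {
	var b wbBuf
	b.reset()
	for i := 0; i < 2000; i++ {
		b.buf[b.reserve(1)] = uintptr(i)
	}
	println("flushes:", b.flushes)
}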

873
src/runtime/asm_mips64x.s Normal file
View File

@@ -0,0 +1,873 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build mips64 || mips64le
#include "go_asm.h"
#include "go_tls.h"
#include "funcdata.h"
#include "textflag.h"
#define REGCTXT R22
TEXT runtime·rt0_go(SB),NOSPLIT|TOPFRAME,$0
// R29 = stack; R4 = argc; R5 = argv
ADDV $-24, R29
MOVW R4, 8(R29) // argc
MOVV R5, 16(R29) // argv
// create istack out of the given (operating system) stack.
// _cgo_init may update stackguard.
MOVV $runtime·g0(SB), g
MOVV $(-64*1024), R23
ADDV R23, R29, R1
MOVV R1, g_stackguard0(g)
MOVV R1, g_stackguard1(g)
MOVV R1, (g_stack+stack_lo)(g)
MOVV R29, (g_stack+stack_hi)(g)
// if there is a _cgo_init, call it using the gcc ABI.
MOVV _cgo_init(SB), R25
BEQ R25, nocgo
MOVV R0, R7 // arg 3: not used
MOVV R0, R6 // arg 2: not used
MOVV $setg_gcc<>(SB), R5 // arg 1: setg
MOVV g, R4 // arg 0: G
JAL (R25)
nocgo:
// update stackguard after _cgo_init
MOVV (g_stack+stack_lo)(g), R1
ADDV $const_stackGuard, R1
MOVV R1, g_stackguard0(g)
MOVV R1, g_stackguard1(g)
// set the per-goroutine and per-mach "registers"
MOVV $runtime·m0(SB), R1
// save m->g0 = g0
MOVV g, m_g0(R1)
// save m0 to g0->m
MOVV R1, g_m(g)
JAL runtime·check(SB)
// args are already prepared
JAL runtime·args(SB)
JAL runtime·osinit(SB)
JAL runtime·schedinit(SB)
// create a new goroutine to start program
MOVV $runtime·mainPC(SB), R1 // entry
ADDV $-16, R29
MOVV R1, 8(R29)
MOVV R0, 0(R29)
JAL runtime·newproc(SB)
ADDV $16, R29
// start this M
JAL runtime·mstart(SB)
MOVV R0, 1(R0)
RET
DATA runtime·mainPC+0(SB)/8,$runtime·main(SB)
GLOBL runtime·mainPC(SB),RODATA,$8
TEXT runtime·breakpoint(SB),NOSPLIT|NOFRAME,$0-0
MOVV R0, 2(R0) // TODO: TD
RET
TEXT runtime·asminit(SB),NOSPLIT|NOFRAME,$0-0
RET
TEXT runtime·mstart(SB),NOSPLIT|TOPFRAME,$0
JAL runtime·mstart0(SB)
RET // not reached
/*
* go-routine
*/
// void gogo(Gobuf*)
// restore state from Gobuf; longjmp
TEXT runtime·gogo(SB), NOSPLIT|NOFRAME, $0-8
MOVV buf+0(FP), R3
MOVV gobuf_g(R3), R4
MOVV 0(R4), R0 // make sure g != nil
JMP gogo<>(SB)
TEXT gogo<>(SB), NOSPLIT|NOFRAME, $0
MOVV R4, g
JAL runtime·save_g(SB)
MOVV 0(g), R2
MOVV gobuf_sp(R3), R29
MOVV gobuf_lr(R3), R31
MOVV gobuf_ret(R3), R1
MOVV gobuf_ctxt(R3), REGCTXT
MOVV R0, gobuf_sp(R3)
MOVV R0, gobuf_ret(R3)
MOVV R0, gobuf_lr(R3)
MOVV R0, gobuf_ctxt(R3)
MOVV gobuf_pc(R3), R4
JMP (R4)
// void mcall(fn func(*g))
// Switch to m->g0's stack, call fn(g).
// Fn must never return. It should gogo(&g->sched)
// to keep running g.
TEXT runtime·mcall(SB), NOSPLIT|NOFRAME, $0-8
// Save caller state in g->sched
MOVV R29, (g_sched+gobuf_sp)(g)
MOVV R31, (g_sched+gobuf_pc)(g)
MOVV R0, (g_sched+gobuf_lr)(g)
// Switch to m->g0 & its stack, call fn.
MOVV g, R1
MOVV g_m(g), R3
MOVV m_g0(R3), g
JAL runtime·save_g(SB)
BNE g, R1, 2(PC)
JMP runtime·badmcall(SB)
MOVV fn+0(FP), REGCTXT // context
MOVV 0(REGCTXT), R4 // code pointer
MOVV (g_sched+gobuf_sp)(g), R29 // sp = m->g0->sched.sp
ADDV $-16, R29
MOVV R1, 8(R29)
MOVV R0, 0(R29)
JAL (R4)
JMP runtime·badmcall2(SB)
// systemstack_switch is a dummy routine that systemstack leaves at the bottom
// of the G stack. We need to distinguish the routine that
// lives at the bottom of the G stack from the one that lives
// at the top of the system stack because the one at the top of
// the system stack terminates the stack walk (see topofstack()).
TEXT runtime·systemstack_switch(SB), NOSPLIT, $0-0
UNDEF
JAL (R31) // make sure this function is not leaf
RET
// func systemstack(fn func())
TEXT runtime·systemstack(SB), NOSPLIT, $0-8
MOVV fn+0(FP), R1 // R1 = fn
MOVV R1, REGCTXT // context
MOVV g_m(g), R2 // R2 = m
MOVV m_gsignal(R2), R3 // R3 = gsignal
BEQ g, R3, noswitch
MOVV m_g0(R2), R3 // R3 = g0
BEQ g, R3, noswitch
MOVV m_curg(R2), R4
BEQ g, R4, switch
// Bad: g is not gsignal, not g0, not curg. What is it?
// Hide call from linker nosplit analysis.
MOVV $runtime·badsystemstack(SB), R4
JAL (R4)
JAL runtime·abort(SB)
switch:
// save our state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
JAL gosave_systemstack_switch<>(SB)
// switch to g0
MOVV R3, g
JAL runtime·save_g(SB)
MOVV (g_sched+gobuf_sp)(g), R1
MOVV R1, R29
// call target function
MOVV 0(REGCTXT), R4 // code pointer
JAL (R4)
// switch back to g
MOVV g_m(g), R1
MOVV m_curg(R1), g
JAL runtime·save_g(SB)
MOVV (g_sched+gobuf_sp)(g), R29
MOVV R0, (g_sched+gobuf_sp)(g)
RET
noswitch:
// already on m stack, just call directly
// Using a tail call here cleans up tracebacks since we won't stop
// at an intermediate systemstack.
MOVV 0(REGCTXT), R4 // code pointer
MOVV 0(R29), R31 // restore LR
ADDV $8, R29
JMP (R4)
// func switchToCrashStack0(fn func())
TEXT runtime·switchToCrashStack0(SB), NOSPLIT, $0-8
MOVV fn+0(FP), REGCTXT // context register
MOVV g_m(g), R2 // curm
// set g to gcrash
MOVV $runtime·gcrash(SB), g // g = &gcrash
CALL runtime·save_g(SB)
MOVV R2, g_m(g) // g.m = curm
MOVV g, m_g0(R2) // curm.g0 = g
// switch to crashstack
MOVV (g_stack+stack_hi)(g), R2
ADDV $(-4*8), R2, R29
// call target function
MOVV 0(REGCTXT), R25
JAL (R25)
// should never return
CALL runtime·abort(SB)
UNDEF
/*
* support for morestack
*/
// Called during function prolog when more stack is needed.
// Caller has already loaded:
// R1: framesize, R2: argsize, R3: LR
//
// The traceback routines see morestack on a g0 as being
// the top of a stack (for example, morestack calling newstack
// calling the scheduler calling newm calling gc), so we must
// record an argument size. For that purpose, it has no arguments.
TEXT runtime·morestack(SB),NOSPLIT|NOFRAME,$0-0
// Called from f.
// Set g->sched to context in f.
MOVV R29, (g_sched+gobuf_sp)(g)
MOVV R31, (g_sched+gobuf_pc)(g)
MOVV R3, (g_sched+gobuf_lr)(g)
MOVV REGCTXT, (g_sched+gobuf_ctxt)(g)
// Cannot grow scheduler stack (m->g0).
MOVV g_m(g), R7
MOVV m_g0(R7), R8
BNE g, R8, 3(PC)
JAL runtime·badmorestackg0(SB)
JAL runtime·abort(SB)
// Cannot grow signal stack (m->gsignal).
MOVV m_gsignal(R7), R8
BNE g, R8, 3(PC)
JAL runtime·badmorestackgsignal(SB)
JAL runtime·abort(SB)
// Called from f.
// Set m->morebuf to f's caller.
MOVV R3, (m_morebuf+gobuf_pc)(R7) // f's caller's PC
MOVV R29, (m_morebuf+gobuf_sp)(R7) // f's caller's SP
MOVV g, (m_morebuf+gobuf_g)(R7)
// Call newstack on m->g0's stack.
MOVV m_g0(R7), g
JAL runtime·save_g(SB)
MOVV (g_sched+gobuf_sp)(g), R29
// Create a stack frame on g0 to call newstack.
MOVV R0, -8(R29) // Zero saved LR in frame
ADDV $-8, R29
JAL runtime·newstack(SB)
// Not reached, but make sure the return PC from the call to newstack
// is still in this function, and not the beginning of the next.
UNDEF
TEXT runtime·morestack_noctxt(SB),NOSPLIT|NOFRAME,$0-0
// Force SPWRITE. This function doesn't actually write SP,
// but it is called with a special calling convention where
// the caller doesn't save LR on stack but passes it as a
// register (R3), and the unwinder currently doesn't understand this.
// Make it SPWRITE to stop unwinding. (See issue 54332)
MOVV R29, R29
MOVV R0, REGCTXT
JMP runtime·morestack(SB)
// reflectcall: call a function with the given argument list
// func call(stackArgsType *_type, f *FuncVal, stackArgs *byte, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs).
// we don't have variable-sized frames, so we use a small number
// of constant-sized-frame functions to encode a few bits of size in the pc.
// Caution: ugly multiline assembly macros in your future!
#define DISPATCH(NAME,MAXSIZE) \
MOVV $MAXSIZE, R23; \
SGTU R1, R23, R23; \
BNE R23, 3(PC); \
MOVV $NAME(SB), R4; \
JMP (R4)
// Note: can't just "BR NAME(SB)" - bad inlining results.
TEXT ·reflectcall(SB), NOSPLIT|NOFRAME, $0-48
MOVWU frameSize+32(FP), R1
DISPATCH(runtime·call16, 16)
DISPATCH(runtime·call32, 32)
DISPATCH(runtime·call64, 64)
DISPATCH(runtime·call128, 128)
DISPATCH(runtime·call256, 256)
DISPATCH(runtime·call512, 512)
DISPATCH(runtime·call1024, 1024)
DISPATCH(runtime·call2048, 2048)
DISPATCH(runtime·call4096, 4096)
DISPATCH(runtime·call8192, 8192)
DISPATCH(runtime·call16384, 16384)
DISPATCH(runtime·call32768, 32768)
DISPATCH(runtime·call65536, 65536)
DISPATCH(runtime·call131072, 131072)
DISPATCH(runtime·call262144, 262144)
DISPATCH(runtime·call524288, 524288)
DISPATCH(runtime·call1048576, 1048576)
DISPATCH(runtime·call2097152, 2097152)
DISPATCH(runtime·call4194304, 4194304)
DISPATCH(runtime·call8388608, 8388608)
DISPATCH(runtime·call16777216, 16777216)
DISPATCH(runtime·call33554432, 33554432)
DISPATCH(runtime·call67108864, 67108864)
DISPATCH(runtime·call134217728, 134217728)
DISPATCH(runtime·call268435456, 268435456)
DISPATCH(runtime·call536870912, 536870912)
DISPATCH(runtime·call1073741824, 1073741824)
MOVV $runtime·badreflectcall(SB), R4
JMP (R4)
#define CALLFN(NAME,MAXSIZE) \
TEXT NAME(SB), WRAPPER, $MAXSIZE-48; \
NO_LOCAL_POINTERS; \
/* copy arguments to stack */ \
MOVV stackArgs+16(FP), R1; \
MOVWU stackArgsSize+24(FP), R2; \
MOVV R29, R3; \
ADDV $8, R3; \
ADDV R3, R2; \
BEQ R3, R2, 6(PC); \
MOVBU (R1), R4; \
ADDV $1, R1; \
MOVBU R4, (R3); \
ADDV $1, R3; \
JMP -5(PC); \
/* call function */ \
MOVV f+8(FP), REGCTXT; \
MOVV (REGCTXT), R4; \
PCDATA $PCDATA_StackMapIndex, $0; \
JAL (R4); \
/* copy return values back */ \
MOVV stackArgsType+0(FP), R5; \
MOVV stackArgs+16(FP), R1; \
MOVWU stackArgsSize+24(FP), R2; \
MOVWU stackRetOffset+28(FP), R4; \
ADDV $8, R29, R3; \
ADDV R4, R3; \
ADDV R4, R1; \
SUBVU R4, R2; \
JAL callRet<>(SB); \
RET
// callRet copies return values back at the end of call*. This is a
// separate function so it can allocate stack space for the arguments
// to reflectcallmove. It does not follow the Go ABI; it expects its
// arguments in registers.
TEXT callRet<>(SB), NOSPLIT, $40-0
MOVV R5, 8(R29)
MOVV R1, 16(R29)
MOVV R3, 24(R29)
MOVV R2, 32(R29)
MOVV $0, 40(R29)
JAL runtime·reflectcallmove(SB)
RET
CALLFN(·call16, 16)
CALLFN(·call32, 32)
CALLFN(·call64, 64)
CALLFN(·call128, 128)
CALLFN(·call256, 256)
CALLFN(·call512, 512)
CALLFN(·call1024, 1024)
CALLFN(·call2048, 2048)
CALLFN(·call4096, 4096)
CALLFN(·call8192, 8192)
CALLFN(·call16384, 16384)
CALLFN(·call32768, 32768)
CALLFN(·call65536, 65536)
CALLFN(·call131072, 131072)
CALLFN(·call262144, 262144)
CALLFN(·call524288, 524288)
CALLFN(·call1048576, 1048576)
CALLFN(·call2097152, 2097152)
CALLFN(·call4194304, 4194304)
CALLFN(·call8388608, 8388608)
CALLFN(·call16777216, 16777216)
CALLFN(·call33554432, 33554432)
CALLFN(·call67108864, 67108864)
CALLFN(·call134217728, 134217728)
CALLFN(·call268435456, 268435456)
CALLFN(·call536870912, 536870912)
CALLFN(·call1073741824, 1073741824)
TEXT runtime·procyield(SB),NOSPLIT,$0-0
RET
// Save state of caller into g->sched,
// but using fake PC from systemstack_switch.
// Must only be called from functions with no locals ($0)
// or else unwinding from systemstack_switch is incorrect.
// Smashes R1.
TEXT gosave_systemstack_switch<>(SB),NOSPLIT|NOFRAME,$0
MOVV $runtime·systemstack_switch(SB), R1
ADDV $8, R1 // get past prologue
MOVV R1, (g_sched+gobuf_pc)(g)
MOVV R29, (g_sched+gobuf_sp)(g)
MOVV R0, (g_sched+gobuf_lr)(g)
MOVV R0, (g_sched+gobuf_ret)(g)
// Assert ctxt is zero. See func save.
MOVV (g_sched+gobuf_ctxt)(g), R1
BEQ R1, 2(PC)
JAL runtime·abort(SB)
RET
// func asmcgocall_no_g(fn, arg unsafe.Pointer)
// Call fn(arg) aligned appropriately for the gcc ABI.
// Called on a system stack, and there may be no g yet (during needm).
TEXT ·asmcgocall_no_g(SB),NOSPLIT,$0-16
MOVV fn+0(FP), R25
MOVV arg+8(FP), R4
JAL (R25)
RET
// func asmcgocall(fn, arg unsafe.Pointer) int32
// Call fn(arg) on the scheduler stack,
// aligned appropriately for the gcc ABI.
// See cgocall.go for more details.
TEXT ·asmcgocall(SB),NOSPLIT,$0-20
MOVV fn+0(FP), R25
MOVV arg+8(FP), R4
MOVV R29, R3 // save original stack pointer
MOVV g, R2
// Figure out if we need to switch to m->g0 stack.
// We get called to create new OS threads too, and those
// come in on the m->g0 stack already. Or we might already
// be on the m->gsignal stack.
MOVV g_m(g), R5
MOVV m_gsignal(R5), R6
BEQ R6, g, g0
MOVV m_g0(R5), R6
BEQ R6, g, g0
JAL gosave_systemstack_switch<>(SB)
MOVV R6, g
JAL runtime·save_g(SB)
MOVV (g_sched+gobuf_sp)(g), R29
// Now on a scheduling stack (a pthread-created stack).
g0:
// Save room for two of our pointers.
ADDV $-16, R29
MOVV R2, 0(R29) // save old g on stack
MOVV (g_stack+stack_hi)(R2), R2
SUBVU R3, R2
MOVV R2, 8(R29) // save depth in old g stack (can't just save SP, as stack might be copied during a callback)
JAL (R25)
// Restore g, stack pointer. R2 is return value.
MOVV 0(R29), g
JAL runtime·save_g(SB)
MOVV (g_stack+stack_hi)(g), R5
MOVV 8(R29), R6
SUBVU R6, R5
MOVV R5, R29
MOVW R2, ret+16(FP)
RET
// func cgocallback(fn, frame unsafe.Pointer, ctxt uintptr)
// See cgocall.go for more details.
TEXT ·cgocallback(SB),NOSPLIT,$24-24
NO_LOCAL_POINTERS
// Skip cgocallbackg, just dropm when fn is nil, and frame is the saved g.
// This path is used to dropm while the thread is exiting.
MOVV fn+0(FP), R5
BNE R5, loadg
// Restore the g from frame.
MOVV frame+8(FP), g
JMP dropm
loadg:
// Load m and g from thread-local storage.
MOVB runtime·iscgo(SB), R1
BEQ R1, nocgo
JAL runtime·load_g(SB)
nocgo:
// If g is nil, Go did not create the current thread,
// or this thread has never called into Go on pthread platforms.
// Call needm to obtain one for temporary use.
// In this case, we're running on the thread stack, so there's
// lots of space, but the linker doesn't know. Hide the call from
// the linker analysis by using an indirect call.
BEQ g, needm
MOVV g_m(g), R3
MOVV R3, savedm-8(SP)
JMP havem
needm:
MOVV g, savedm-8(SP) // g is zero, so is m.
MOVV $runtime·needAndBindM(SB), R4
JAL (R4)
// Set m->sched.sp = SP, so that if a panic happens
// during the function we are about to execute, it will
// have a valid SP to run on the g0 stack.
// The next few lines (after the havem label)
// will save this SP onto the stack and then write
// the same SP back to m->sched.sp. That seems redundant,
// but if an unrecovered panic happens, unwindm will
// restore the g->sched.sp from the stack location
// and then systemstack will try to use it. If we don't set it here,
// that restored SP will be uninitialized (typically 0) and
// will not be usable.
MOVV g_m(g), R3
MOVV m_g0(R3), R1
MOVV R29, (g_sched+gobuf_sp)(R1)
havem:
// Now there's a valid m, and we're running on its m->g0.
// Save current m->g0->sched.sp on stack and then set it to SP.
// Save current sp in m->g0->sched.sp in preparation for
// switch back to m->curg stack.
// NOTE: unwindm knows that the saved g->sched.sp is at 8(R29) aka savedsp-24(SP).
MOVV m_g0(R3), R1
MOVV (g_sched+gobuf_sp)(R1), R2
MOVV R2, savedsp-24(SP) // must match frame size
MOVV R29, (g_sched+gobuf_sp)(R1)
// Switch to m->curg stack and call runtime.cgocallbackg.
// Because we are taking over the execution of m->curg
// but *not* resuming what had been running, we need to
// save that information (m->curg->sched) so we can restore it.
// We can restore m->curg->sched.sp easily, because calling
// runtime.cgocallbackg leaves SP unchanged upon return.
// To save m->curg->sched.pc, we push it onto the curg stack and
// open a frame the same size as cgocallback's g0 frame.
// Once we switch to the curg stack, the pushed PC will appear
// to be the return PC of cgocallback, so that the traceback
// will seamlessly trace back into the earlier calls.
MOVV m_curg(R3), g
JAL runtime·save_g(SB)
MOVV (g_sched+gobuf_sp)(g), R2 // prepare stack as R2
MOVV (g_sched+gobuf_pc)(g), R4
MOVV R4, -(24+8)(R2) // "saved LR"; must match frame size
// Gather our arguments into registers.
MOVV fn+0(FP), R5
MOVV frame+8(FP), R6
MOVV ctxt+16(FP), R7
MOVV $-(24+8)(R2), R29 // switch stack; must match frame size
MOVV R5, 8(R29)
MOVV R6, 16(R29)
MOVV R7, 24(R29)
JAL runtime·cgocallbackg(SB)
// Restore g->sched (== m->curg->sched) from saved values.
MOVV 0(R29), R4
MOVV R4, (g_sched+gobuf_pc)(g)
MOVV $(24+8)(R29), R2 // must match frame size
MOVV R2, (g_sched+gobuf_sp)(g)
// Switch back to m->g0's stack and restore m->g0->sched.sp.
// (Unlike m->curg, the g0 goroutine never uses sched.pc,
// so we do not have to restore it.)
MOVV g_m(g), R3
MOVV m_g0(R3), g
JAL runtime·save_g(SB)
MOVV (g_sched+gobuf_sp)(g), R29
MOVV savedsp-24(SP), R2 // must match frame size
MOVV R2, (g_sched+gobuf_sp)(g)
// If the m on entry was nil, we called needm above to borrow an m,
// 1. for the duration of the call on non-pthread platforms,
// 2. or for the lifetime of the C thread on pthread platforms.
// If the m on entry wasn't nil,
// 1. the thread might be a Go thread,
// 2. or it wasn't the first call from a C thread on pthread platforms,
// since then we skip dropm to reuse the m in the first call.
MOVV savedm-8(SP), R3
BNE R3, droppedm
// Skip dropm to reuse it in the next call, when a pthread key has been created.
MOVV _cgo_pthread_key_created(SB), R3
// If _cgo_pthread_key_created is a nil pointer, cgo is disabled and we need dropm.
BEQ R3, dropm
MOVV (R3), R3
BNE R3, droppedm
dropm:
MOVV $runtime·dropm(SB), R4
JAL (R4)
droppedm:
// Done!
RET
// void setg(G*); set g. for use by needm.
TEXT runtime·setg(SB), NOSPLIT, $0-8
MOVV gg+0(FP), g
// This only happens if iscgo, so jump straight to save_g
JAL runtime·save_g(SB)
RET
// void setg_gcc(G*); set g called from gcc with g in R1
TEXT setg_gcc<>(SB),NOSPLIT,$0-0
MOVV R1, g
JAL runtime·save_g(SB)
RET
TEXT runtime·abort(SB),NOSPLIT|NOFRAME,$0-0
MOVW (R0), R0
UNDEF
// AES hashing not implemented for mips64
TEXT runtime·memhash(SB),NOSPLIT|NOFRAME,$0-32
JMP runtime·memhashFallback(SB)
TEXT runtime·strhash(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·strhashFallback(SB)
TEXT runtime·memhash32(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·memhash32Fallback(SB)
TEXT runtime·memhash64(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·memhash64Fallback(SB)
TEXT runtime·return0(SB), NOSPLIT, $0
MOVW $0, R1
RET
// Called from cgo wrappers, this function returns g->m->curg.stack.hi.
// Must obey the gcc calling convention.
TEXT _cgo_topofstack(SB),NOSPLIT,$16
// g (R30) and REGTMP (R23) might be clobbered by load_g. They
// are callee-save in the gcc calling convention, so save them.
MOVV R23, savedR23-16(SP)
MOVV g, savedG-8(SP)
JAL runtime·load_g(SB)
MOVV g_m(g), R1
MOVV m_curg(R1), R1
MOVV (g_stack+stack_hi)(R1), R2 // return value in R2
MOVV savedG-8(SP), g
MOVV savedR23-16(SP), R23
RET
// The top-most function running on a goroutine
// returns to goexit+PCQuantum.
TEXT runtime·goexit(SB),NOSPLIT|NOFRAME|TOPFRAME,$0-0
NOR R0, R0 // NOP
JAL runtime·goexit1(SB) // does not return
// traceback from goexit1 must hit code range of goexit
NOR R0, R0 // NOP
TEXT ·checkASM(SB),NOSPLIT,$0-1
MOVW $1, R1
MOVB R1, ret+0(FP)
RET
// gcWriteBarrier informs the GC about heap pointer writes.
//
// gcWriteBarrier does NOT follow the Go ABI. It accepts the
// number of bytes of buffer needed in R25, and returns a pointer
// to the buffer space in R25.
// It clobbers R23 (the linker temp register).
// The act of CALLing gcWriteBarrier will clobber R31 (LR).
// It does not clobber any other general-purpose registers,
// but may clobber others (e.g., floating point registers).
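// In rough pseudo-Go, the fast path below does (a sketch, not actual
// runtime source):
//
//	buf := &getg().m.p.wbBuf
//	buf.next += bytesNeeded        // bytesNeeded arrives in R25
//	if buf.next > buf.end {
//		// slow path: save all GPRs, call wbBufFlush, then retry
//	}
//	return buf.next - bytesNeeded  // pointer to the reserved slots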
TEXT gcWriteBarrier<>(SB),NOSPLIT,$192
// Save the registers clobbered by the fast path.
MOVV R1, 184(R29)
MOVV R2, 192(R29)
retry:
MOVV g_m(g), R1
MOVV m_p(R1), R1
MOVV (p_wbBuf+wbBuf_next)(R1), R2
MOVV (p_wbBuf+wbBuf_end)(R1), R23 // R23 is linker temp register
// Increment wbBuf.next position.
ADDV R25, R2
// Is the buffer full?
SGTU R2, R23, R23
BNE R23, flush
// Commit to the larger buffer.
MOVV R2, (p_wbBuf+wbBuf_next)(R1)
// Make return value (the original next position)
SUBV R25, R2, R25
// Restore registers.
MOVV 184(R29), R1
MOVV 192(R29), R2
RET
flush:
// Save all general purpose registers since these could be
// clobbered by wbBufFlush and were not saved by the caller.
MOVV R20, 8(R29)
MOVV R21, 16(R29)
// R1 already saved
// R2 already saved
MOVV R3, 24(R29)
MOVV R4, 32(R29)
MOVV R5, 40(R29)
MOVV R6, 48(R29)
MOVV R7, 56(R29)
MOVV R8, 64(R29)
MOVV R9, 72(R29)
MOVV R10, 80(R29)
MOVV R11, 88(R29)
MOVV R12, 96(R29)
MOVV R13, 104(R29)
MOVV R14, 112(R29)
MOVV R15, 120(R29)
MOVV R16, 128(R29)
MOVV R17, 136(R29)
MOVV R18, 144(R29)
MOVV R19, 152(R29)
// R20 already saved
// R21 already saved.
MOVV R22, 160(R29)
// R23 is tmp register.
MOVV R24, 168(R29)
MOVV R25, 176(R29)
// R26 is reserved by kernel.
// R27 is reserved by kernel.
// R28 is REGSB (not modified by Go code).
// R29 is SP.
// R30 is g.
// R31 is LR, which was saved by the prologue.
CALL runtime·wbBufFlush(SB)
MOVV 8(R29), R20
MOVV 16(R29), R21
MOVV 24(R29), R3
MOVV 32(R29), R4
MOVV 40(R29), R5
MOVV 48(R29), R6
MOVV 56(R29), R7
MOVV 64(R29), R8
MOVV 72(R29), R9
MOVV 80(R29), R10
MOVV 88(R29), R11
MOVV 96(R29), R12
MOVV 104(R29), R13
MOVV 112(R29), R14
MOVV 120(R29), R15
MOVV 128(R29), R16
MOVV 136(R29), R17
MOVV 144(R29), R18
MOVV 152(R29), R19
MOVV 160(R29), R22
MOVV 168(R29), R24
MOVV 176(R29), R25
JMP retry
TEXT runtime·gcWriteBarrier1<ABIInternal>(SB),NOSPLIT,$0
MOVV $8, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier2<ABIInternal>(SB),NOSPLIT,$0
MOVV $16, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier3<ABIInternal>(SB),NOSPLIT,$0
MOVV $24, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier4<ABIInternal>(SB),NOSPLIT,$0
MOVV $32, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier5<ABIInternal>(SB),NOSPLIT,$0
MOVV $40, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier6<ABIInternal>(SB),NOSPLIT,$0
MOVV $48, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier7<ABIInternal>(SB),NOSPLIT,$0
MOVV $56, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier8<ABIInternal>(SB),NOSPLIT,$0
MOVV $64, R25
JMP gcWriteBarrier<>(SB)
// Note: these functions use a special calling convention to save generated code space.
// Arguments are passed in registers, but the space for those arguments is allocated
// in the caller's stack frame. These stubs write the args into that stack space and
// then tail call to the corresponding runtime handler.
// The tail call makes these stubs disappear in backtraces.
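// For example (illustrative only): for an out-of-range s[i], compiled code
// reaches runtime·panicIndex below with the index in R1 and the length in
// R2; the stub stores them into the caller-allocated x/y slots and
// tail-calls goPanicIndex, which panics with
// "index out of range [x] with length y".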
TEXT runtime·panicIndex(SB),NOSPLIT,$0-16
MOVV R1, x+0(FP)
MOVV R2, y+8(FP)
JMP runtime·goPanicIndex(SB)
TEXT runtime·panicIndexU(SB),NOSPLIT,$0-16
MOVV R1, x+0(FP)
MOVV R2, y+8(FP)
JMP runtime·goPanicIndexU(SB)
TEXT runtime·panicSliceAlen(SB),NOSPLIT,$0-16
MOVV R2, x+0(FP)
MOVV R3, y+8(FP)
JMP runtime·goPanicSliceAlen(SB)
TEXT runtime·panicSliceAlenU(SB),NOSPLIT,$0-16
MOVV R2, x+0(FP)
MOVV R3, y+8(FP)
JMP runtime·goPanicSliceAlenU(SB)
TEXT runtime·panicSliceAcap(SB),NOSPLIT,$0-16
MOVV R2, x+0(FP)
MOVV R3, y+8(FP)
JMP runtime·goPanicSliceAcap(SB)
TEXT runtime·panicSliceAcapU(SB),NOSPLIT,$0-16
MOVV R2, x+0(FP)
MOVV R3, y+8(FP)
JMP runtime·goPanicSliceAcapU(SB)
TEXT runtime·panicSliceB(SB),NOSPLIT,$0-16
MOVV R1, x+0(FP)
MOVV R2, y+8(FP)
JMP runtime·goPanicSliceB(SB)
TEXT runtime·panicSliceBU(SB),NOSPLIT,$0-16
MOVV R1, x+0(FP)
MOVV R2, y+8(FP)
JMP runtime·goPanicSliceBU(SB)
TEXT runtime·panicSlice3Alen(SB),NOSPLIT,$0-16
MOVV R3, x+0(FP)
MOVV R4, y+8(FP)
JMP runtime·goPanicSlice3Alen(SB)
TEXT runtime·panicSlice3AlenU(SB),NOSPLIT,$0-16
MOVV R3, x+0(FP)
MOVV R4, y+8(FP)
JMP runtime·goPanicSlice3AlenU(SB)
TEXT runtime·panicSlice3Acap(SB),NOSPLIT,$0-16
MOVV R3, x+0(FP)
MOVV R4, y+8(FP)
JMP runtime·goPanicSlice3Acap(SB)
TEXT runtime·panicSlice3AcapU(SB),NOSPLIT,$0-16
MOVV R3, x+0(FP)
MOVV R4, y+8(FP)
JMP runtime·goPanicSlice3AcapU(SB)
TEXT runtime·panicSlice3B(SB),NOSPLIT,$0-16
MOVV R2, x+0(FP)
MOVV R3, y+8(FP)
JMP runtime·goPanicSlice3B(SB)
TEXT runtime·panicSlice3BU(SB),NOSPLIT,$0-16
MOVV R2, x+0(FP)
MOVV R3, y+8(FP)
JMP runtime·goPanicSlice3BU(SB)
TEXT runtime·panicSlice3C(SB),NOSPLIT,$0-16
MOVV R1, x+0(FP)
MOVV R2, y+8(FP)
JMP runtime·goPanicSlice3C(SB)
TEXT runtime·panicSlice3CU(SB),NOSPLIT,$0-16
MOVV R1, x+0(FP)
MOVV R2, y+8(FP)
JMP runtime·goPanicSlice3CU(SB)
TEXT runtime·panicSliceConvert(SB),NOSPLIT,$0-16
MOVV R3, x+0(FP)
MOVV R4, y+8(FP)
JMP runtime·goPanicSliceConvert(SB)

951
src/runtime/asm_mipsx.s Normal file
View File

@@ -0,0 +1,951 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build mips || mipsle
#include "go_asm.h"
#include "go_tls.h"
#include "funcdata.h"
#include "textflag.h"
#define REGCTXT R22
TEXT runtime·rt0_go(SB),NOSPLIT|TOPFRAME,$0
// R29 = stack; R4 = argc; R5 = argv
ADDU $-12, R29
MOVW R4, 4(R29) // argc
MOVW R5, 8(R29) // argv
// create istack out of the given (operating system) stack.
// _cgo_init may update stackguard.
MOVW $runtime·g0(SB), g
MOVW $(-64*1024), R23
ADD R23, R29, R1
MOVW R1, g_stackguard0(g)
MOVW R1, g_stackguard1(g)
MOVW R1, (g_stack+stack_lo)(g)
MOVW R29, (g_stack+stack_hi)(g)
// if there is a _cgo_init, call it using the gcc ABI.
MOVW _cgo_init(SB), R25
BEQ R25, nocgo
ADDU $-16, R29
MOVW R0, R7 // arg 3: not used
MOVW R0, R6 // arg 2: not used
MOVW $setg_gcc<>(SB), R5 // arg 1: setg
MOVW g, R4 // arg 0: G
JAL (R25)
ADDU $16, R29
nocgo:
// update stackguard after _cgo_init
MOVW (g_stack+stack_lo)(g), R1
ADD $const_stackGuard, R1
MOVW R1, g_stackguard0(g)
MOVW R1, g_stackguard1(g)
// set the per-goroutine and per-mach "registers"
MOVW $runtime·m0(SB), R1
// save m->g0 = g0
MOVW g, m_g0(R1)
// save m0 to g0->m
MOVW R1, g_m(g)
JAL runtime·check(SB)
// args are already prepared
JAL runtime·args(SB)
JAL runtime·osinit(SB)
JAL runtime·schedinit(SB)
// create a new goroutine to start program
MOVW $runtime·mainPC(SB), R1 // entry
ADDU $-8, R29
MOVW R1, 4(R29)
MOVW R0, 0(R29)
JAL runtime·newproc(SB)
ADDU $8, R29
// start this M
JAL runtime·mstart(SB)
UNDEF
RET
DATA runtime·mainPC+0(SB)/4,$runtime·main(SB)
GLOBL runtime·mainPC(SB),RODATA,$4
TEXT runtime·breakpoint(SB),NOSPLIT,$0-0
BREAK
RET
TEXT runtime·asminit(SB),NOSPLIT,$0-0
RET
TEXT runtime·mstart(SB),NOSPLIT|TOPFRAME,$0
JAL runtime·mstart0(SB)
RET // not reached
/*
* go-routine
*/
// void gogo(Gobuf*)
// restore state from Gobuf; longjmp
TEXT runtime·gogo(SB),NOSPLIT|NOFRAME,$0-4
MOVW buf+0(FP), R3
MOVW gobuf_g(R3), R4
MOVW 0(R4), R5 // make sure g != nil
JMP gogo<>(SB)
TEXT gogo<>(SB),NOSPLIT|NOFRAME,$0
MOVW R4, g
JAL runtime·save_g(SB)
MOVW gobuf_sp(R3), R29
MOVW gobuf_lr(R3), R31
MOVW gobuf_ret(R3), R1
MOVW gobuf_ctxt(R3), REGCTXT
MOVW R0, gobuf_sp(R3)
MOVW R0, gobuf_ret(R3)
MOVW R0, gobuf_lr(R3)
MOVW R0, gobuf_ctxt(R3)
MOVW gobuf_pc(R3), R4
JMP (R4)
// void mcall(fn func(*g))
// Switch to m->g0's stack, call fn(g).
// Fn must never return. It should gogo(&g->sched)
// to keep running g.
TEXT runtime·mcall(SB),NOSPLIT|NOFRAME,$0-4
// Save caller state in g->sched
MOVW R29, (g_sched+gobuf_sp)(g)
MOVW R31, (g_sched+gobuf_pc)(g)
MOVW R0, (g_sched+gobuf_lr)(g)
// Switch to m->g0 & its stack, call fn.
MOVW g, R1
MOVW g_m(g), R3
MOVW m_g0(R3), g
JAL runtime·save_g(SB)
BNE g, R1, 2(PC)
JMP runtime·badmcall(SB)
MOVW fn+0(FP), REGCTXT // context
MOVW 0(REGCTXT), R4 // code pointer
MOVW (g_sched+gobuf_sp)(g), R29 // sp = m->g0->sched.sp
ADDU $-8, R29 // make room for 1 arg and fake LR
MOVW R1, 4(R29)
MOVW R0, 0(R29)
JAL (R4)
JMP runtime·badmcall2(SB)
// systemstack_switch is a dummy routine that systemstack leaves at the bottom
// of the G stack. We need to distinguish the routine that
// lives at the bottom of the G stack from the one that lives
// at the top of the system stack because the one at the top of
// the system stack terminates the stack walk (see topofstack()).
TEXT runtime·systemstack_switch(SB),NOSPLIT,$0-0
UNDEF
JAL (R31) // make sure this function is not leaf
RET
// func systemstack(fn func())
TEXT runtime·systemstack(SB),NOSPLIT,$0-4
MOVW fn+0(FP), R1 // R1 = fn
MOVW R1, REGCTXT // context
MOVW g_m(g), R2 // R2 = m
MOVW m_gsignal(R2), R3 // R3 = gsignal
BEQ g, R3, noswitch
MOVW m_g0(R2), R3 // R3 = g0
BEQ g, R3, noswitch
MOVW m_curg(R2), R4
BEQ g, R4, switch
// Bad: g is not gsignal, not g0, not curg. What is it?
// Hide call from linker nosplit analysis.
MOVW $runtime·badsystemstack(SB), R4
JAL (R4)
JAL runtime·abort(SB)
switch:
// save our state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
JAL gosave_systemstack_switch<>(SB)
// switch to g0
MOVW R3, g
JAL runtime·save_g(SB)
MOVW (g_sched+gobuf_sp)(g), R1
MOVW R1, R29
// call target function
MOVW 0(REGCTXT), R4 // code pointer
JAL (R4)
// switch back to g
MOVW g_m(g), R1
MOVW m_curg(R1), g
JAL runtime·save_g(SB)
MOVW (g_sched+gobuf_sp)(g), R29
MOVW R0, (g_sched+gobuf_sp)(g)
RET
noswitch:
// already on m stack, just call directly
// Using a tail call here cleans up tracebacks since we won't stop
// at an intermediate systemstack.
MOVW 0(REGCTXT), R4 // code pointer
MOVW 0(R29), R31 // restore LR
ADD $4, R29
JMP (R4)
// func switchToCrashStack0(fn func())
TEXT runtime·switchToCrashStack0(SB), NOSPLIT, $0-4
MOVW fn+0(FP), REGCTXT // context register
MOVW g_m(g), R2 // curm
// set g to gcrash
MOVW $runtime·gcrash(SB), g // g = &gcrash
CALL runtime·save_g(SB)
MOVW R2, g_m(g) // g.m = curm
MOVW g, m_g0(R2) // curm.g0 = g
// switch to crashstack
MOVW (g_stack+stack_hi)(g), R2
ADDU $(-4*8), R2, R29
// call target function
MOVW 0(REGCTXT), R25
JAL (R25)
// should never return
CALL runtime·abort(SB)
UNDEF
/*
* support for morestack
*/
// Called during function prolog when more stack is needed.
// Caller has already loaded:
// R1: framesize, R2: argsize, R3: LR
//
// The traceback routines see morestack on a g0 as being
// the top of a stack (for example, morestack calling newstack
// calling the scheduler calling newm calling gc), so we must
// record an argument size. For that purpose, it has no arguments.
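// The overall sequence, for orientation (a sketch of the usual flow):
// f's prologue notices SP is below the stack guard and calls morestack;
// morestack saves f's state in g->sched and f's caller in m->morebuf,
// switches to g0, and calls newstack; newstack allocates a larger stack,
// copies the old one over, and resumes f via gogo(&g->sched), so f's
// prologue re-runs on the bigger stack.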
TEXT runtime·morestack(SB),NOSPLIT|NOFRAME,$0-0
// Called from f.
// Set g->sched to context in f.
MOVW R29, (g_sched+gobuf_sp)(g)
MOVW R31, (g_sched+gobuf_pc)(g)
MOVW R3, (g_sched+gobuf_lr)(g)
MOVW REGCTXT, (g_sched+gobuf_ctxt)(g)
// Cannot grow scheduler stack (m->g0).
MOVW g_m(g), R7
MOVW m_g0(R7), R8
BNE g, R8, 3(PC)
JAL runtime·badmorestackg0(SB)
JAL runtime·abort(SB)
// Cannot grow signal stack (m->gsignal).
MOVW m_gsignal(R7), R8
BNE g, R8, 3(PC)
JAL runtime·badmorestackgsignal(SB)
JAL runtime·abort(SB)
// Called from f.
// Set m->morebuf to f's caller.
MOVW R3, (m_morebuf+gobuf_pc)(R7) // f's caller's PC
MOVW R29, (m_morebuf+gobuf_sp)(R7) // f's caller's SP
MOVW g, (m_morebuf+gobuf_g)(R7)
// Call newstack on m->g0's stack.
MOVW m_g0(R7), g
JAL runtime·save_g(SB)
MOVW (g_sched+gobuf_sp)(g), R29
// Create a stack frame on g0 to call newstack.
MOVW R0, -4(R29) // Zero saved LR in frame
ADDU $-4, R29
JAL runtime·newstack(SB)
// Not reached, but make sure the return PC from the call to newstack
// is still in this function, and not the beginning of the next.
UNDEF
TEXT runtime·morestack_noctxt(SB),NOSPLIT,$0-0
// Force SPWRITE. This function doesn't actually write SP,
// but it is called with a special calling convention where
// the caller doesn't save LR on stack but passes it as a
// register (R3), and the unwinder currently doesn't understand.
// Make it SPWRITE to stop unwinding. (See issue 54332)
MOVW R29, R29
MOVW R0, REGCTXT
JMP runtime·morestack(SB)
// reflectcall: call a function with the given argument list
// func call(stackArgsType *_type, f *FuncVal, stackArgs *byte, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs).
// we don't have variable-sized frames, so we use a small number
// of constant-sized-frame functions to encode a few bits of size in the pc.
#define DISPATCH(NAME,MAXSIZE) \
MOVW $MAXSIZE, R23; \
SGTU R1, R23, R23; \
BNE R23, 3(PC); \
MOVW $NAME(SB), R4; \
JMP (R4)
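// For example (illustrative only): with frameSize = 100 in R1, the first
// three DISPATCH tests fall through (100 > 16, 32, 64) and the fourth
// matches (100 <= 128), so dispatch tail-calls runtime·call128, the
// smallest call* function whose fixed frame holds the arguments.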
TEXT ·reflectcall(SB),NOSPLIT|NOFRAME,$0-28
MOVW frameSize+20(FP), R1
DISPATCH(runtime·call16, 16)
DISPATCH(runtime·call32, 32)
DISPATCH(runtime·call64, 64)
DISPATCH(runtime·call128, 128)
DISPATCH(runtime·call256, 256)
DISPATCH(runtime·call512, 512)
DISPATCH(runtime·call1024, 1024)
DISPATCH(runtime·call2048, 2048)
DISPATCH(runtime·call4096, 4096)
DISPATCH(runtime·call8192, 8192)
DISPATCH(runtime·call16384, 16384)
DISPATCH(runtime·call32768, 32768)
DISPATCH(runtime·call65536, 65536)
DISPATCH(runtime·call131072, 131072)
DISPATCH(runtime·call262144, 262144)
DISPATCH(runtime·call524288, 524288)
DISPATCH(runtime·call1048576, 1048576)
DISPATCH(runtime·call2097152, 2097152)
DISPATCH(runtime·call4194304, 4194304)
DISPATCH(runtime·call8388608, 8388608)
DISPATCH(runtime·call16777216, 16777216)
DISPATCH(runtime·call33554432, 33554432)
DISPATCH(runtime·call67108864, 67108864)
DISPATCH(runtime·call134217728, 134217728)
DISPATCH(runtime·call268435456, 268435456)
DISPATCH(runtime·call536870912, 536870912)
DISPATCH(runtime·call1073741824, 1073741824)
MOVW $runtime·badreflectcall(SB), R4
JMP (R4)
#define CALLFN(NAME,MAXSIZE) \
TEXT NAME(SB),WRAPPER,$MAXSIZE-28; \
NO_LOCAL_POINTERS; \
/* copy arguments to stack */ \
MOVW stackArgs+8(FP), R1; \
MOVW stackArgsSize+12(FP), R2; \
MOVW R29, R3; \
ADDU $4, R3; \
ADDU R3, R2; \
BEQ R3, R2, 6(PC); \
MOVBU (R1), R4; \
ADDU $1, R1; \
MOVBU R4, (R3); \
ADDU $1, R3; \
JMP -5(PC); \
/* call function */ \
MOVW f+4(FP), REGCTXT; \
MOVW (REGCTXT), R4; \
PCDATA $PCDATA_StackMapIndex, $0; \
JAL (R4); \
/* copy return values back */ \
MOVW stackArgsType+0(FP), R5; \
MOVW stackArgs+8(FP), R1; \
MOVW stackArgsSize+12(FP), R2; \
MOVW stackRetOffset+16(FP), R4; \
ADDU $4, R29, R3; \
ADDU R4, R3; \
ADDU R4, R1; \
SUBU R4, R2; \
JAL callRet<>(SB); \
RET
// callRet copies return values back at the end of call*. This is a
// separate function so it can allocate stack space for the arguments
// to reflectcallmove. It does not follow the Go ABI; it expects its
// arguments in registers.
TEXT callRet<>(SB), NOSPLIT, $20-0
MOVW R5, 4(R29)
MOVW R1, 8(R29)
MOVW R3, 12(R29)
MOVW R2, 16(R29)
MOVW $0, 20(R29)
JAL runtime·reflectcallmove(SB)
RET
CALLFN(·call16, 16)
CALLFN(·call32, 32)
CALLFN(·call64, 64)
CALLFN(·call128, 128)
CALLFN(·call256, 256)
CALLFN(·call512, 512)
CALLFN(·call1024, 1024)
CALLFN(·call2048, 2048)
CALLFN(·call4096, 4096)
CALLFN(·call8192, 8192)
CALLFN(·call16384, 16384)
CALLFN(·call32768, 32768)
CALLFN(·call65536, 65536)
CALLFN(·call131072, 131072)
CALLFN(·call262144, 262144)
CALLFN(·call524288, 524288)
CALLFN(·call1048576, 1048576)
CALLFN(·call2097152, 2097152)
CALLFN(·call4194304, 4194304)
CALLFN(·call8388608, 8388608)
CALLFN(·call16777216, 16777216)
CALLFN(·call33554432, 33554432)
CALLFN(·call67108864, 67108864)
CALLFN(·call134217728, 134217728)
CALLFN(·call268435456, 268435456)
CALLFN(·call536870912, 536870912)
CALLFN(·call1073741824, 1073741824)
TEXT runtime·procyield(SB),NOSPLIT,$0-4
RET
// Save state of caller into g->sched,
// but using fake PC from systemstack_switch.
// Must only be called from functions with no locals ($0)
// or else unwinding from systemstack_switch is incorrect.
// Smashes R1.
TEXT gosave_systemstack_switch<>(SB),NOSPLIT|NOFRAME,$0
MOVW $runtime·systemstack_switch(SB), R1
ADDU $8, R1 // get past prologue
MOVW R1, (g_sched+gobuf_pc)(g)
MOVW R29, (g_sched+gobuf_sp)(g)
MOVW R0, (g_sched+gobuf_lr)(g)
MOVW R0, (g_sched+gobuf_ret)(g)
// Assert ctxt is zero. See func save.
MOVW (g_sched+gobuf_ctxt)(g), R1
BEQ R1, 2(PC)
JAL runtime·abort(SB)
RET
// func asmcgocall(fn, arg unsafe.Pointer) int32
// Call fn(arg) on the scheduler stack,
// aligned appropriately for the gcc ABI.
// See cgocall.go for more details.
TEXT ·asmcgocall(SB),NOSPLIT,$0-12
MOVW fn+0(FP), R25
MOVW arg+4(FP), R4
MOVW R29, R3 // save original stack pointer
MOVW g, R2
// Figure out if we need to switch to m->g0 stack.
// We get called to create new OS threads too, and those
// come in on the m->g0 stack already. Or we might already
// be on the m->gsignal stack.
MOVW g_m(g), R5
MOVW m_gsignal(R5), R6
BEQ R6, g, g0
MOVW m_g0(R5), R6
BEQ R6, g, g0
JAL gosave_systemstack_switch<>(SB)
MOVW R6, g
JAL runtime·save_g(SB)
MOVW (g_sched+gobuf_sp)(g), R29
// Now on a scheduling stack (a pthread-created stack).
g0:
// Save room for two of our pointers and O32 frame.
ADDU $-24, R29
AND $~7, R29 // O32 ABI expects 8-byte aligned stack on function entry
MOVW R2, 16(R29) // save old g on stack
MOVW (g_stack+stack_hi)(R2), R2
SUBU R3, R2
MOVW R2, 20(R29) // save depth in old g stack (can't just save SP, as stack might be copied during a callback)
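// (Illustration of why depth is saved rather than SP: if a callback grows
// the old g stack, the stack is copied and stack_hi changes, but
// stack_hi-depth still names the same logical position, so the restore
// below recomputes R29 as the new stack_hi minus the saved depth.)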
JAL (R25)
// Restore g, stack pointer. R2 is return value.
MOVW 16(R29), g
JAL runtime·save_g(SB)
MOVW (g_stack+stack_hi)(g), R5
MOVW 20(R29), R6
SUBU R6, R5
MOVW R5, R29
MOVW R2, ret+8(FP)
RET
// cgocallback(fn, frame unsafe.Pointer, ctxt uintptr)
// See cgocall.go for more details.
TEXT ·cgocallback(SB),NOSPLIT,$12-12
NO_LOCAL_POINTERS
// When fn is nil, skip cgocallbackg and just dropm; frame holds the saved g.
// This path is used to drop the m while a thread is exiting.
MOVW fn+0(FP), R5
BNE R5, loadg
// Restore the g from frame.
MOVW frame+4(FP), g
JMP dropm
loadg:
// Load m and g from thread-local storage.
MOVB runtime·iscgo(SB), R1
BEQ R1, nocgo
JAL runtime·load_g(SB)
nocgo:
// g is nil if Go did not create the current thread, or, on pthread
// platforms, if this thread has never called into Go.
// Call needm to obtain an m for temporary use.
// In this case, we're running on the thread stack, so there's
// lots of space, but the linker doesn't know. Hide the call from
// the linker analysis by using an indirect call.
BEQ g, needm
MOVW g_m(g), R3
MOVW R3, savedm-4(SP)
JMP havem
needm:
MOVW g, savedm-4(SP) // g is zero, so is m.
MOVW $runtime·needAndBindM(SB), R4
JAL (R4)
// Set m->sched.sp = SP, so that if a panic happens
// during the function we are about to execute, it will
// have a valid SP to run on the g0 stack.
// The next few lines (after the havem label)
// will save this SP onto the stack and then write
// the same SP back to m->sched.sp. That seems redundant,
// but if an unrecovered panic happens, unwindm will
// restore the g->sched.sp from the stack location
// and then systemstack will try to use it. If we don't set it here,
// that restored SP will be uninitialized (typically 0) and
// will not be usable.
MOVW g_m(g), R3
MOVW m_g0(R3), R1
MOVW R29, (g_sched+gobuf_sp)(R1)
havem:
// Now there's a valid m, and we're running on its m->g0.
// Save current m->g0->sched.sp on stack and then set it to SP.
// Save current sp in m->g0->sched.sp in preparation for
// switch back to m->curg stack.
// NOTE: unwindm knows that the saved g->sched.sp is at 4(R29) aka savedsp-12(SP).
MOVW m_g0(R3), R1
MOVW (g_sched+gobuf_sp)(R1), R2
MOVW R2, savedsp-12(SP) // must match frame size
MOVW R29, (g_sched+gobuf_sp)(R1)
// Switch to m->curg stack and call runtime.cgocallbackg.
// Because we are taking over the execution of m->curg
// but *not* resuming what had been running, we need to
// save that information (m->curg->sched) so we can restore it.
// We can restore m->curg->sched.sp easily, because calling
// runtime.cgocallbackg leaves SP unchanged upon return.
// To save m->curg->sched.pc, we push it onto the curg stack and
// open a frame the same size as cgocallback's g0 frame.
// Once we switch to the curg stack, the pushed PC will appear
// to be the return PC of cgocallback, so that the traceback
// will seamlessly trace back into the earlier calls.
MOVW m_curg(R3), g
JAL runtime·save_g(SB)
MOVW (g_sched+gobuf_sp)(g), R2 // prepare stack as R2
MOVW (g_sched+gobuf_pc)(g), R4
MOVW R4, -(12+4)(R2) // "saved LR"; must match frame size
// Gather our arguments into registers.
MOVW fn+0(FP), R5
MOVW frame+4(FP), R6
MOVW ctxt+8(FP), R7
MOVW $-(12+4)(R2), R29 // switch stack; must match frame size
MOVW R5, 4(R29)
MOVW R6, 8(R29)
MOVW R7, 12(R29)
JAL runtime·cgocallbackg(SB)
// Restore g->sched (== m->curg->sched) from saved values.
MOVW 0(R29), R4
MOVW R4, (g_sched+gobuf_pc)(g)
MOVW $(12+4)(R29), R2 // must match frame size
MOVW R2, (g_sched+gobuf_sp)(g)
// Switch back to m->g0's stack and restore m->g0->sched.sp.
// (Unlike m->curg, the g0 goroutine never uses sched.pc,
// so we do not have to restore it.)
MOVW g_m(g), R3
MOVW m_g0(R3), g
JAL runtime·save_g(SB)
MOVW (g_sched+gobuf_sp)(g), R29
MOVW savedsp-12(SP), R2 // must match frame size
MOVW R2, (g_sched+gobuf_sp)(g)
// If the m on entry was nil, we called needm above to borrow an m,
// 1. for the duration of the call on non-pthread platforms,
// 2. or for the lifetime of the C thread on pthread platforms.
// If the m on entry wasn't nil,
// 1. the thread might be a Go thread,
// 2. or it's a later call from a C thread on pthread platforms,
// in which case we skip dropm so the m borrowed by the first call is reused.
MOVW savedm-4(SP), R3
BNE R3, droppedm
// If a pthread key has been created, skip dropm so the m can be reused by the next call.
MOVW _cgo_pthread_key_created(SB), R3
// A nil _cgo_pthread_key_created means cgo is disabled, so dropm is needed.
BEQ R3, dropm
MOVW (R3), R3
BNE R3, droppedm
dropm:
MOVW $runtime·dropm(SB), R4
JAL (R4)
droppedm:
// Done!
RET
// void setg(G*); set g. for use by needm.
// This only happens if iscgo, so jump straight to save_g
TEXT runtime·setg(SB),NOSPLIT,$0-4
MOVW gg+0(FP), g
JAL runtime·save_g(SB)
RET
// void setg_gcc(G*); set g in C TLS.
// Must obey the gcc calling convention.
TEXT setg_gcc<>(SB),NOSPLIT,$0
MOVW R4, g
JAL runtime·save_g(SB)
RET
TEXT runtime·abort(SB),NOSPLIT,$0-0
UNDEF
// AES hashing not implemented for mips
TEXT runtime·memhash(SB),NOSPLIT|NOFRAME,$0-16
JMP runtime·memhashFallback(SB)
TEXT runtime·strhash(SB),NOSPLIT|NOFRAME,$0-12
JMP runtime·strhashFallback(SB)
TEXT runtime·memhash32(SB),NOSPLIT|NOFRAME,$0-12
JMP runtime·memhash32Fallback(SB)
TEXT runtime·memhash64(SB),NOSPLIT|NOFRAME,$0-12
JMP runtime·memhash64Fallback(SB)
TEXT runtime·return0(SB),NOSPLIT,$0
MOVW $0, R1
RET
// Called from cgo wrappers, this function returns g->m->curg.stack.hi.
// Must obey the gcc calling convention.
TEXT _cgo_topofstack(SB),NOSPLIT|NOFRAME,$0
// g (R30), R3 and REGTMP (R23) might be clobbered by load_g. R30 and R23
// are callee-save in the gcc calling convention, so save them.
MOVW R23, R8
MOVW g, R9
MOVW R31, R10 // this call frame does not save LR
JAL runtime·load_g(SB)
MOVW g_m(g), R1
MOVW m_curg(R1), R1
MOVW (g_stack+stack_hi)(R1), R2 // return value in R2
MOVW R8, R23
MOVW R9, g
MOVW R10, R31
RET
// The top-most function running on a goroutine
// returns to goexit+PCQuantum.
TEXT runtime·goexit(SB),NOSPLIT|NOFRAME|TOPFRAME,$0-0
NOR R0, R0 // NOP
JAL runtime·goexit1(SB) // does not return
// traceback from goexit1 must hit code range of goexit
NOR R0, R0 // NOP
TEXT ·checkASM(SB),NOSPLIT,$0-1
MOVW $1, R1
MOVB R1, ret+0(FP)
RET
// gcWriteBarrier informs the GC about heap pointer writes.
//
// gcWriteBarrier does NOT follow the Go ABI. It accepts the
// number of bytes of buffer needed in R25, and returns a pointer
// to the buffer space in R25.
// It clobbers R23 (the linker temp register).
// The act of CALLing gcWriteBarrier will clobber R31 (LR).
// It does not clobber any other general-purpose registers,
// but may clobber others (e.g., floating point registers).
TEXT gcWriteBarrier<>(SB),NOSPLIT,$104
// Save the registers clobbered by the fast path.
MOVW R1, 100(R29)
MOVW R2, 104(R29)
retry:
MOVW g_m(g), R1
MOVW m_p(R1), R1
MOVW (p_wbBuf+wbBuf_next)(R1), R2
MOVW (p_wbBuf+wbBuf_end)(R1), R23 // R23 is linker temp register
// Increment wbBuf.next position.
ADD R25, R2
// Is the buffer full?
SGTU R2, R23, R23
BNE R23, flush
// Commit to the larger buffer.
MOVW R2, (p_wbBuf+wbBuf_next)(R1)
// Make return value (the original next position)
SUB R25, R2, R25
// Restore registers.
MOVW 100(R29), R1
MOVW 104(R29), R2
RET
flush:
// Save all general purpose registers since these could be
// clobbered by wbBufFlush and were not saved by the caller.
MOVW R20, 4(R29)
MOVW R21, 8(R29)
// R1 already saved
// R2 already saved
MOVW R3, 12(R29)
MOVW R4, 16(R29)
MOVW R5, 20(R29)
MOVW R6, 24(R29)
MOVW R7, 28(R29)
MOVW R8, 32(R29)
MOVW R9, 36(R29)
MOVW R10, 40(R29)
MOVW R11, 44(R29)
MOVW R12, 48(R29)
MOVW R13, 52(R29)
MOVW R14, 56(R29)
MOVW R15, 60(R29)
MOVW R16, 64(R29)
MOVW R17, 68(R29)
MOVW R18, 72(R29)
MOVW R19, 76(R29)
// R20 already saved
// R21 already saved.
MOVW R22, 80(R29)
// R23 is tmp register.
MOVW R24, 84(R29)
MOVW R25, 88(R29)
// R26 is reserved by kernel.
// R27 is reserved by kernel.
MOVW R28, 92(R29)
// R29 is SP.
// R30 is g.
// R31 is LR, which was saved by the prologue.
CALL runtime·wbBufFlush(SB)
MOVW 4(R29), R20
MOVW 8(R29), R21
MOVW 12(R29), R3
MOVW 16(R29), R4
MOVW 20(R29), R5
MOVW 24(R29), R6
MOVW 28(R29), R7
MOVW 32(R29), R8
MOVW 36(R29), R9
MOVW 40(R29), R10
MOVW 44(R29), R11
MOVW 48(R29), R12
MOVW 52(R29), R13
MOVW 56(R29), R14
MOVW 60(R29), R15
MOVW 64(R29), R16
MOVW 68(R29), R17
MOVW 72(R29), R18
MOVW 76(R29), R19
MOVW 80(R29), R22
MOVW 84(R29), R24
MOVW 88(R29), R25
MOVW 92(R29), R28
JMP retry
TEXT runtime·gcWriteBarrier1<ABIInternal>(SB),NOSPLIT,$0
MOVW $4, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier2<ABIInternal>(SB),NOSPLIT,$0
MOVW $8, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier3<ABIInternal>(SB),NOSPLIT,$0
MOVW $12, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier4<ABIInternal>(SB),NOSPLIT,$0
MOVW $16, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier5<ABIInternal>(SB),NOSPLIT,$0
MOVW $20, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier6<ABIInternal>(SB),NOSPLIT,$0
MOVW $24, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier7<ABIInternal>(SB),NOSPLIT,$0
MOVW $28, R25
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier8<ABIInternal>(SB),NOSPLIT,$0
MOVW $32, R25
JMP gcWriteBarrier<>(SB)
// Note: these functions use a special calling convention to save generated code space.
Arguments are passed in registers, but the space for those arguments is allocated
// in the caller's stack frame. These stubs write the args into that stack space and
// then tail call to the corresponding runtime handler.
// The tail call makes these stubs disappear in backtraces.
TEXT runtime·panicIndex(SB),NOSPLIT,$0-8
MOVW R1, x+0(FP)
MOVW R2, y+4(FP)
JMP runtime·goPanicIndex(SB)
TEXT runtime·panicIndexU(SB),NOSPLIT,$0-8
MOVW R1, x+0(FP)
MOVW R2, y+4(FP)
JMP runtime·goPanicIndexU(SB)
TEXT runtime·panicSliceAlen(SB),NOSPLIT,$0-8
MOVW R2, x+0(FP)
MOVW R3, y+4(FP)
JMP runtime·goPanicSliceAlen(SB)
TEXT runtime·panicSliceAlenU(SB),NOSPLIT,$0-8
MOVW R2, x+0(FP)
MOVW R3, y+4(FP)
JMP runtime·goPanicSliceAlenU(SB)
TEXT runtime·panicSliceAcap(SB),NOSPLIT,$0-8
MOVW R2, x+0(FP)
MOVW R3, y+4(FP)
JMP runtime·goPanicSliceAcap(SB)
TEXT runtime·panicSliceAcapU(SB),NOSPLIT,$0-8
MOVW R2, x+0(FP)
MOVW R3, y+4(FP)
JMP runtime·goPanicSliceAcapU(SB)
TEXT runtime·panicSliceB(SB),NOSPLIT,$0-8
MOVW R1, x+0(FP)
MOVW R2, y+4(FP)
JMP runtime·goPanicSliceB(SB)
TEXT runtime·panicSliceBU(SB),NOSPLIT,$0-8
MOVW R1, x+0(FP)
MOVW R2, y+4(FP)
JMP runtime·goPanicSliceBU(SB)
TEXT runtime·panicSlice3Alen(SB),NOSPLIT,$0-8
MOVW R3, x+0(FP)
MOVW R4, y+4(FP)
JMP runtime·goPanicSlice3Alen(SB)
TEXT runtime·panicSlice3AlenU(SB),NOSPLIT,$0-8
MOVW R3, x+0(FP)
MOVW R4, y+4(FP)
JMP runtime·goPanicSlice3AlenU(SB)
TEXT runtime·panicSlice3Acap(SB),NOSPLIT,$0-8
MOVW R3, x+0(FP)
MOVW R4, y+4(FP)
JMP runtime·goPanicSlice3Acap(SB)
TEXT runtime·panicSlice3AcapU(SB),NOSPLIT,$0-8
MOVW R3, x+0(FP)
MOVW R4, y+4(FP)
JMP runtime·goPanicSlice3AcapU(SB)
TEXT runtime·panicSlice3B(SB),NOSPLIT,$0-8
MOVW R2, x+0(FP)
MOVW R3, y+4(FP)
JMP runtime·goPanicSlice3B(SB)
TEXT runtime·panicSlice3BU(SB),NOSPLIT,$0-8
MOVW R2, x+0(FP)
MOVW R3, y+4(FP)
JMP runtime·goPanicSlice3BU(SB)
TEXT runtime·panicSlice3C(SB),NOSPLIT,$0-8
MOVW R1, x+0(FP)
MOVW R2, y+4(FP)
JMP runtime·goPanicSlice3C(SB)
TEXT runtime·panicSlice3CU(SB),NOSPLIT,$0-8
MOVW R1, x+0(FP)
MOVW R2, y+4(FP)
JMP runtime·goPanicSlice3CU(SB)
TEXT runtime·panicSliceConvert(SB),NOSPLIT,$0-8
MOVW R3, x+0(FP)
MOVW R4, y+4(FP)
JMP runtime·goPanicSliceConvert(SB)
// Extended versions for 64-bit indexes.
TEXT runtime·panicExtendIndex(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R1, lo+4(FP)
MOVW R2, y+8(FP)
JMP runtime·goPanicExtendIndex(SB)
TEXT runtime·panicExtendIndexU(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R1, lo+4(FP)
MOVW R2, y+8(FP)
JMP runtime·goPanicExtendIndexU(SB)
TEXT runtime·panicExtendSliceAlen(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R2, lo+4(FP)
MOVW R3, y+8(FP)
JMP runtime·goPanicExtendSliceAlen(SB)
TEXT runtime·panicExtendSliceAlenU(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R2, lo+4(FP)
MOVW R3, y+8(FP)
JMP runtime·goPanicExtendSliceAlenU(SB)
TEXT runtime·panicExtendSliceAcap(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R2, lo+4(FP)
MOVW R3, y+8(FP)
JMP runtime·goPanicExtendSliceAcap(SB)
TEXT runtime·panicExtendSliceAcapU(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R2, lo+4(FP)
MOVW R3, y+8(FP)
JMP runtime·goPanicExtendSliceAcapU(SB)
TEXT runtime·panicExtendSliceB(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R1, lo+4(FP)
MOVW R2, y+8(FP)
JMP runtime·goPanicExtendSliceB(SB)
TEXT runtime·panicExtendSliceBU(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R1, lo+4(FP)
MOVW R2, y+8(FP)
JMP runtime·goPanicExtendSliceBU(SB)
TEXT runtime·panicExtendSlice3Alen(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R3, lo+4(FP)
MOVW R4, y+8(FP)
JMP runtime·goPanicExtendSlice3Alen(SB)
TEXT runtime·panicExtendSlice3AlenU(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R3, lo+4(FP)
MOVW R4, y+8(FP)
JMP runtime·goPanicExtendSlice3AlenU(SB)
TEXT runtime·panicExtendSlice3Acap(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R3, lo+4(FP)
MOVW R4, y+8(FP)
JMP runtime·goPanicExtendSlice3Acap(SB)
TEXT runtime·panicExtendSlice3AcapU(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R3, lo+4(FP)
MOVW R4, y+8(FP)
JMP runtime·goPanicExtendSlice3AcapU(SB)
TEXT runtime·panicExtendSlice3B(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R2, lo+4(FP)
MOVW R3, y+8(FP)
JMP runtime·goPanicExtendSlice3B(SB)
TEXT runtime·panicExtendSlice3BU(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R2, lo+4(FP)
MOVW R3, y+8(FP)
JMP runtime·goPanicExtendSlice3BU(SB)
TEXT runtime·panicExtendSlice3C(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R1, lo+4(FP)
MOVW R2, y+8(FP)
JMP runtime·goPanicExtendSlice3C(SB)
TEXT runtime·panicExtendSlice3CU(SB),NOSPLIT,$0-12
MOVW R5, hi+0(FP)
MOVW R1, lo+4(FP)
MOVW R2, y+8(FP)
JMP runtime·goPanicExtendSlice3CU(SB)

55
src/runtime/asm_ppc64x.h Normal file
View File

@@ -0,0 +1,55 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// FIXED_FRAME defines the size of the fixed part of a stack frame. A stack
// frame looks like this:
//
// +---------------------+
// | local variable area |
// +---------------------+
// | argument area |
// +---------------------+ <- R1+FIXED_FRAME
// | fixed area |
// +---------------------+ <- R1
//
// So a function that sets up a stack frame at all uses at least FIXED_FRAME
// bytes of stack. This mostly affects assembly that calls other functions
// with arguments (the arguments should be stored at FIXED_FRAME+0(R1),
// FIXED_FRAME+8(R1) etc) and some other low-level places.
//
// The reason for using a constant is to make supporting PIC easier (we
// only support PIC on ppc64le, which has a minimum stack frame of 32 bytes
// and currently always uses that much; PIC on ppc64 would need 48).
#define FIXED_FRAME 32
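// For example (illustrative only; somefunc is a placeholder), assembly
// that calls a two-argument function stores the arguments just above the
// fixed area before the call:
//
//	MOVD	R3, FIXED_FRAME+0(R1)
//	MOVD	R4, FIXED_FRAME+8(R1)
//	BL	runtime·somefunc(SB)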
// aix/ppc64 uses XCOFF which uses function descriptors.
// AIX cannot perform the TOC relocation in a text section.
// Therefore, these descriptors must live in a data section.
#ifdef GOOS_aix
#ifdef GOARCH_ppc64
#define GO_PPC64X_HAS_FUNCDESC
#define DEFINE_PPC64X_FUNCDESC(funcname, localfuncname) \
DATA funcname+0(SB)/8, $localfuncname(SB) \
DATA funcname+8(SB)/8, $TOC(SB) \
DATA funcname+16(SB)/8, $0 \
GLOBL funcname(SB), NOPTR, $24
#endif
#endif
// linux/ppc64 uses ELFv1 which uses function descriptors.
// These must also look like ABI0 functions on linux/ppc64
// to work with abi.FuncPCABI0(sigtramp) in os_linux.go.
// Only static codegen is supported on linux/ppc64, so TOC
// is not needed.
#ifdef GOOS_linux
#ifdef GOARCH_ppc64
#define GO_PPC64X_HAS_FUNCDESC
#define DEFINE_PPC64X_FUNCDESC(funcname, localfuncname) \
TEXT funcname(SB),NOSPLIT|NOFRAME,$0 \
DWORD $localfuncname(SB) \
DWORD $0 \
DWORD $0
#endif
#endif

1622
src/runtime/asm_ppc64x.s Normal file

File diff suppressed because it is too large

963
src/runtime/asm_riscv64.s Normal file
View File

@@ -0,0 +1,963 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "go_asm.h"
#include "funcdata.h"
#include "textflag.h"
// func rt0_go()
TEXT runtime·rt0_go(SB),NOSPLIT|TOPFRAME,$0
// X2 = stack; A0 = argc; A1 = argv
SUB $24, X2
MOV A0, 8(X2) // argc
MOV A1, 16(X2) // argv
// create istack out of the given (operating system) stack.
// _cgo_init may update stackguard.
MOV $runtime·g0(SB), g
MOV $(-64*1024), T0
ADD T0, X2, T1
MOV T1, g_stackguard0(g)
MOV T1, g_stackguard1(g)
MOV T1, (g_stack+stack_lo)(g)
MOV X2, (g_stack+stack_hi)(g)
// if there is a _cgo_init, call it using the gcc ABI.
MOV _cgo_init(SB), T0
BEQ T0, ZERO, nocgo
MOV ZERO, A3 // arg 3: not used
MOV ZERO, A2 // arg 2: not used
MOV $setg_gcc<>(SB), A1 // arg 1: setg
MOV g, A0 // arg 0: G
JALR RA, T0
nocgo:
// update stackguard after _cgo_init
MOV (g_stack+stack_lo)(g), T0
ADD $const_stackGuard, T0
MOV T0, g_stackguard0(g)
MOV T0, g_stackguard1(g)
// set the per-goroutine and per-mach "registers"
MOV $runtime·m0(SB), T0
// save m->g0 = g0
MOV g, m_g0(T0)
// save m0 to g0->m
MOV T0, g_m(g)
CALL runtime·check(SB)
// args are already prepared
CALL runtime·args(SB)
CALL runtime·osinit(SB)
CALL runtime·schedinit(SB)
// create a new goroutine to start program
MOV $runtime·mainPC(SB), T0 // entry
SUB $16, X2
MOV T0, 8(X2)
MOV ZERO, 0(X2)
CALL runtime·newproc(SB)
ADD $16, X2
// start this M
CALL runtime·mstart(SB)
WORD $0 // crash if reached
RET
TEXT runtime·mstart(SB),NOSPLIT|TOPFRAME,$0
CALL runtime·mstart0(SB)
RET // not reached
// void setg_gcc(G*); set g called from gcc with g in A0
TEXT setg_gcc<>(SB),NOSPLIT,$0-0
MOV A0, g
CALL runtime·save_g(SB)
RET
// func cputicks() int64
TEXT runtime·cputicks(SB),NOSPLIT,$0-8
// RDTIME to emulate cpu ticks
// RDCYCLE reads counter that is per HART(core) based
// according to the riscv manual, see issue 46737
RDTIME A0
MOV A0, ret+0(FP)
RET
// systemstack_switch is a dummy routine that systemstack leaves at the bottom
// of the G stack. We need to distinguish the routine that
// lives at the bottom of the G stack from the one that lives
// at the top of the system stack because the one at the top of
// the system stack terminates the stack walk (see topofstack()).
TEXT runtime·systemstack_switch(SB), NOSPLIT, $0-0
UNDEF
JALR RA, ZERO // make sure this function is not leaf
RET
// func systemstack(fn func())
TEXT runtime·systemstack(SB), NOSPLIT, $0-8
MOV fn+0(FP), CTXT // CTXT = fn
MOV g_m(g), T0 // T0 = m
MOV m_gsignal(T0), T1 // T1 = gsignal
BEQ g, T1, noswitch
MOV m_g0(T0), T1 // T1 = g0
BEQ g, T1, noswitch
MOV m_curg(T0), T2
BEQ g, T2, switch
// Bad: g is not gsignal, not g0, not curg. What is it?
// Hide call from linker nosplit analysis.
MOV $runtime·badsystemstack(SB), T1
JALR RA, T1
switch:
// save our state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
CALL gosave_systemstack_switch<>(SB)
// switch to g0
MOV T1, g
CALL runtime·save_g(SB)
MOV (g_sched+gobuf_sp)(g), T0
MOV T0, X2
// call target function
MOV 0(CTXT), T1 // code pointer
JALR RA, T1
// switch back to g
MOV g_m(g), T0
MOV m_curg(T0), g
CALL runtime·save_g(SB)
MOV (g_sched+gobuf_sp)(g), X2
MOV ZERO, (g_sched+gobuf_sp)(g)
RET
noswitch:
// already on m stack, just call directly
// Using a tail call here cleans up tracebacks since we won't stop
// at an intermediate systemstack.
MOV 0(CTXT), T1 // code pointer
ADD $8, X2
JMP (T1)
// func switchToCrashStack0(fn func())
TEXT runtime·switchToCrashStack0<ABIInternal>(SB), NOSPLIT, $0-8
MOV X10, CTXT // context register
MOV g_m(g), X11 // curm
// set g to gcrash
MOV $runtime·gcrash(SB), g // g = &gcrash
CALL runtime·save_g(SB) // clobbers X31
MOV X11, g_m(g) // g.m = curm
MOV g, m_g0(X11) // curm.g0 = g
// switch to crashstack
MOV (g_stack+stack_hi)(g), X11
SUB $(4*8), X11
MOV X11, X2
// call target function
MOV 0(CTXT), X10
JALR X1, X10
// should never return
CALL runtime·abort(SB)
UNDEF
/*
* support for morestack
*/
// Called during function prolog when more stack is needed.
// Called with return address (i.e. caller's PC) in X5 (aka T0),
// and the LR register contains the caller's LR.
//
// The traceback routines see morestack on a g0 as being
// the top of a stack (for example, morestack calling newstack
// calling the scheduler calling newm calling gc), so we must
// record an argument size. For that purpose, it has no arguments.
// func morestack()
TEXT runtime·morestack(SB),NOSPLIT|NOFRAME,$0-0
// Called from f.
// Set g->sched to context in f.
MOV X2, (g_sched+gobuf_sp)(g)
MOV T0, (g_sched+gobuf_pc)(g)
MOV RA, (g_sched+gobuf_lr)(g)
MOV CTXT, (g_sched+gobuf_ctxt)(g)
// Cannot grow scheduler stack (m->g0).
MOV g_m(g), A0
MOV m_g0(A0), A1
BNE g, A1, 3(PC)
CALL runtime·badmorestackg0(SB)
CALL runtime·abort(SB)
// Cannot grow signal stack (m->gsignal).
MOV m_gsignal(A0), A1
BNE g, A1, 3(PC)
CALL runtime·badmorestackgsignal(SB)
CALL runtime·abort(SB)
// Called from f.
// Set m->morebuf to f's caller.
MOV RA, (m_morebuf+gobuf_pc)(A0) // f's caller's PC
MOV X2, (m_morebuf+gobuf_sp)(A0) // f's caller's SP
MOV g, (m_morebuf+gobuf_g)(A0)
// Call newstack on m->g0's stack.
MOV m_g0(A0), g
CALL runtime·save_g(SB)
MOV (g_sched+gobuf_sp)(g), X2
// Create a stack frame on g0 to call newstack.
MOV ZERO, -8(X2) // Zero saved LR in frame
SUB $8, X2
CALL runtime·newstack(SB)
// Not reached, but make sure the return PC from the call to newstack
// is still in this function, and not the beginning of the next.
UNDEF
// func morestack_noctxt()
TEXT runtime·morestack_noctxt(SB),NOSPLIT|NOFRAME,$0-0
// Force SPWRITE. This function doesn't actually write SP,
// but it is called with a special calling convention where
// the caller doesn't save LR on stack but passes it as a
// register, and the unwinder currently doesn't understand.
// Make it SPWRITE to stop unwinding. (See issue 54332)
MOV X2, X2
MOV ZERO, CTXT
JMP runtime·morestack(SB)
// AES hashing not implemented for riscv64
TEXT runtime·memhash<ABIInternal>(SB),NOSPLIT|NOFRAME,$0-32
JMP runtime·memhashFallback<ABIInternal>(SB)
TEXT runtime·strhash<ABIInternal>(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·strhashFallback<ABIInternal>(SB)
TEXT runtime·memhash32<ABIInternal>(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·memhash32Fallback<ABIInternal>(SB)
TEXT runtime·memhash64<ABIInternal>(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·memhash64Fallback<ABIInternal>(SB)
// func return0()
TEXT runtime·return0(SB), NOSPLIT, $0
MOV $0, A0
RET
// restore state from Gobuf; longjmp
// func gogo(buf *gobuf)
TEXT runtime·gogo(SB), NOSPLIT|NOFRAME, $0-8
MOV buf+0(FP), T0
MOV gobuf_g(T0), T1
MOV 0(T1), ZERO // make sure g != nil
JMP gogo<>(SB)
TEXT gogo<>(SB), NOSPLIT|NOFRAME, $0
MOV T1, g
CALL runtime·save_g(SB)
MOV gobuf_sp(T0), X2
MOV gobuf_lr(T0), RA
MOV gobuf_ret(T0), A0
MOV gobuf_ctxt(T0), CTXT
MOV ZERO, gobuf_sp(T0)
MOV ZERO, gobuf_ret(T0)
MOV ZERO, gobuf_lr(T0)
MOV ZERO, gobuf_ctxt(T0)
MOV gobuf_pc(T0), T0
JALR ZERO, T0
// func procyield(cycles uint32)
TEXT runtime·procyield(SB),NOSPLIT,$0-0
RET
// Switch to m->g0's stack, call fn(g).
// Fn must never return. It should gogo(&g->sched)
// to keep running g.
// func mcall(fn func(*g))
TEXT runtime·mcall<ABIInternal>(SB), NOSPLIT|NOFRAME, $0-8
MOV X10, CTXT
// Save caller state in g->sched
MOV X2, (g_sched+gobuf_sp)(g)
MOV RA, (g_sched+gobuf_pc)(g)
MOV ZERO, (g_sched+gobuf_lr)(g)
// Switch to m->g0 & its stack, call fn.
MOV g, X10
MOV g_m(g), T1
MOV m_g0(T1), g
CALL runtime·save_g(SB)
BNE g, X10, 2(PC)
JMP runtime·badmcall(SB)
MOV 0(CTXT), T1 // code pointer
MOV (g_sched+gobuf_sp)(g), X2 // sp = m->g0->sched.sp
// we don't need special macro for regabi since arg0(X10) = g
SUB $16, X2
MOV X10, 8(X2) // setup g
MOV ZERO, 0(X2) // clear return address
JALR RA, T1
JMP runtime·badmcall2(SB)
// Save state of caller into g->sched,
// but using fake PC from systemstack_switch.
// Must only be called from functions with no locals ($0)
// or else unwinding from systemstack_switch is incorrect.
// Smashes X31.
TEXT gosave_systemstack_switch<>(SB),NOSPLIT|NOFRAME,$0
MOV $runtime·systemstack_switch(SB), X31
ADD $8, X31 // get past prologue
MOV X31, (g_sched+gobuf_pc)(g)
MOV X2, (g_sched+gobuf_sp)(g)
MOV ZERO, (g_sched+gobuf_lr)(g)
MOV ZERO, (g_sched+gobuf_ret)(g)
// Assert ctxt is zero. See func save.
MOV (g_sched+gobuf_ctxt)(g), X31
BEQ ZERO, X31, 2(PC)
CALL runtime·abort(SB)
RET
// func asmcgocall_no_g(fn, arg unsafe.Pointer)
// Call fn(arg) aligned appropriately for the gcc ABI.
// Called on a system stack, and there may be no g yet (during needm).
TEXT ·asmcgocall_no_g(SB),NOSPLIT,$0-16
MOV fn+0(FP), X5
MOV arg+8(FP), X10
JALR RA, (X5)
RET
// func asmcgocall(fn, arg unsafe.Pointer) int32
// Call fn(arg) on the scheduler stack,
// aligned appropriately for the gcc ABI.
// See cgocall.go for more details.
TEXT ·asmcgocall(SB),NOSPLIT,$0-20
MOV fn+0(FP), X5
MOV arg+8(FP), X10
MOV X2, X8 // save original stack pointer
MOV g, X9
// Figure out if we need to switch to m->g0 stack.
// We get called to create new OS threads too, and those
// come in on the m->g0 stack already. Or we might already
// be on the m->gsignal stack.
MOV g_m(g), X6
MOV m_gsignal(X6), X7
BEQ X7, g, g0
MOV m_g0(X6), X7
BEQ X7, g, g0
CALL gosave_systemstack_switch<>(SB)
MOV X7, g
CALL runtime·save_g(SB)
MOV (g_sched+gobuf_sp)(g), X2
// Now on a scheduling stack (a pthread-created stack).
g0:
// Save room for two of our pointers.
SUB $16, X2
MOV X9, 0(X2) // save old g on stack
MOV (g_stack+stack_hi)(X9), X9
SUB X8, X9, X8
MOV X8, 8(X2) // save depth in old g stack (can't just save SP, as stack might be copied during a callback)
JALR RA, (X5)
// Restore g, stack pointer. X10 is return value.
MOV 0(X2), g
CALL runtime·save_g(SB)
MOV (g_stack+stack_hi)(g), X5
MOV 8(X2), X6
SUB X6, X5, X6
MOV X6, X2
MOVW X10, ret+16(FP)
RET
// func asminit()
TEXT runtime·asminit(SB),NOSPLIT|NOFRAME,$0-0
RET
// reflectcall: call a function with the given argument list
// func call(stackArgsType *_type, f *FuncVal, stackArgs *byte, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs).
// we don't have variable-sized frames, so we use a small number
// of constant-sized-frame functions to encode a few bits of size in the pc.
// Caution: ugly multiline assembly macros in your future!
#define DISPATCH(NAME,MAXSIZE) \
MOV $MAXSIZE, T1 \
BLTU T1, T0, 3(PC) \
MOV $NAME(SB), T2; \
JALR ZERO, T2
// Note: can't just "BR NAME(SB)" - bad inlining results.
// func call(stackArgsType *rtype, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs).
TEXT reflect·call(SB), NOSPLIT, $0-0
JMP ·reflectcall(SB)
// func call(stackArgsType *_type, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs).
TEXT ·reflectcall(SB), NOSPLIT|NOFRAME, $0-48
MOVWU frameSize+32(FP), T0
DISPATCH(runtime·call16, 16)
DISPATCH(runtime·call32, 32)
DISPATCH(runtime·call64, 64)
DISPATCH(runtime·call128, 128)
DISPATCH(runtime·call256, 256)
DISPATCH(runtime·call512, 512)
DISPATCH(runtime·call1024, 1024)
DISPATCH(runtime·call2048, 2048)
DISPATCH(runtime·call4096, 4096)
DISPATCH(runtime·call8192, 8192)
DISPATCH(runtime·call16384, 16384)
DISPATCH(runtime·call32768, 32768)
DISPATCH(runtime·call65536, 65536)
DISPATCH(runtime·call131072, 131072)
DISPATCH(runtime·call262144, 262144)
DISPATCH(runtime·call524288, 524288)
DISPATCH(runtime·call1048576, 1048576)
DISPATCH(runtime·call2097152, 2097152)
DISPATCH(runtime·call4194304, 4194304)
DISPATCH(runtime·call8388608, 8388608)
DISPATCH(runtime·call16777216, 16777216)
DISPATCH(runtime·call33554432, 33554432)
DISPATCH(runtime·call67108864, 67108864)
DISPATCH(runtime·call134217728, 134217728)
DISPATCH(runtime·call268435456, 268435456)
DISPATCH(runtime·call536870912, 536870912)
DISPATCH(runtime·call1073741824, 1073741824)
MOV $runtime·badreflectcall(SB), T2
JALR ZERO, T2
#define CALLFN(NAME,MAXSIZE) \
TEXT NAME(SB), WRAPPER, $MAXSIZE-48; \
NO_LOCAL_POINTERS; \
/* copy arguments to stack */ \
MOV stackArgs+16(FP), A1; \
MOVWU stackArgsSize+24(FP), A2; \
MOV X2, A3; \
ADD $8, A3; \
ADD A3, A2; \
BEQ A3, A2, 6(PC); \
MOVBU (A1), A4; \
ADD $1, A1; \
MOVB A4, (A3); \
ADD $1, A3; \
JMP -5(PC); \
/* set up argument registers */ \
MOV regArgs+40(FP), X25; \
CALL ·unspillArgs(SB); \
/* call function */ \
MOV f+8(FP), CTXT; \
MOV (CTXT), X25; \
PCDATA $PCDATA_StackMapIndex, $0; \
JALR RA, X25; \
/* copy return values back */ \
MOV regArgs+40(FP), X25; \
CALL ·spillArgs(SB); \
MOV stackArgsType+0(FP), A5; \
MOV stackArgs+16(FP), A1; \
MOVWU stackArgsSize+24(FP), A2; \
MOVWU stackRetOffset+28(FP), A4; \
ADD $8, X2, A3; \
ADD A4, A3; \
ADD A4, A1; \
SUB A4, A2; \
CALL callRet<>(SB); \
RET
// callRet copies return values back at the end of call*. This is a
// separate function so it can allocate stack space for the arguments
// to reflectcallmove. It does not follow the Go ABI; it expects its
// arguments in registers.
TEXT callRet<>(SB), NOSPLIT, $40-0
NO_LOCAL_POINTERS
MOV A5, 8(X2)
MOV A1, 16(X2)
MOV A3, 24(X2)
MOV A2, 32(X2)
MOV X25, 40(X2)
CALL runtime·reflectcallmove(SB)
RET
CALLFN(·call16, 16)
CALLFN(·call32, 32)
CALLFN(·call64, 64)
CALLFN(·call128, 128)
CALLFN(·call256, 256)
CALLFN(·call512, 512)
CALLFN(·call1024, 1024)
CALLFN(·call2048, 2048)
CALLFN(·call4096, 4096)
CALLFN(·call8192, 8192)
CALLFN(·call16384, 16384)
CALLFN(·call32768, 32768)
CALLFN(·call65536, 65536)
CALLFN(·call131072, 131072)
CALLFN(·call262144, 262144)
CALLFN(·call524288, 524288)
CALLFN(·call1048576, 1048576)
CALLFN(·call2097152, 2097152)
CALLFN(·call4194304, 4194304)
CALLFN(·call8388608, 8388608)
CALLFN(·call16777216, 16777216)
CALLFN(·call33554432, 33554432)
CALLFN(·call67108864, 67108864)
CALLFN(·call134217728, 134217728)
CALLFN(·call268435456, 268435456)
CALLFN(·call536870912, 536870912)
CALLFN(·call1073741824, 1073741824)
// Called from cgo wrappers, this function returns g->m->curg.stack.hi.
// Must obey the gcc calling convention.
TEXT _cgo_topofstack(SB),NOSPLIT,$8
// g (X27) and REG_TMP (X31) might be clobbered by load_g.
// X27 is callee-save in the gcc calling convention, so save it.
MOV g, savedX27-8(SP)
CALL runtime·load_g(SB)
MOV g_m(g), X5
MOV m_curg(X5), X5
MOV (g_stack+stack_hi)(X5), X10 // return value in X10
MOV savedX27-8(SP), g
RET
// func goexit(neverCallThisFunction)
// The top-most function running on a goroutine
// returns to goexit+PCQuantum.
TEXT runtime·goexit(SB),NOSPLIT|NOFRAME|TOPFRAME,$0-0
MOV ZERO, ZERO // NOP
JMP runtime·goexit1(SB) // does not return
// traceback from goexit1 must hit code range of goexit
MOV ZERO, ZERO // NOP
// func cgocallback(fn, frame unsafe.Pointer, ctxt uintptr)
// See cgocall.go for more details.
TEXT ·cgocallback(SB),NOSPLIT,$24-24
NO_LOCAL_POINTERS
// When fn is nil, skip cgocallbackg and just dropm; frame holds the saved g.
// This path is used to drop the m while a thread is exiting.
MOV fn+0(FP), X7
BNE ZERO, X7, loadg
// Restore the g from frame.
MOV frame+8(FP), g
JMP dropm
loadg:
// Load m and g from thread-local storage.
MOVBU runtime·iscgo(SB), X5
BEQ ZERO, X5, nocgo
CALL runtime·load_g(SB)
nocgo:
// g is nil if Go did not create the current thread, or, on pthread
// platforms, if this thread has never called into Go.
// Call needm to obtain an m for temporary use.
// In this case, we're running on the thread stack, so there's
// lots of space, but the linker doesn't know. Hide the call from
// the linker analysis by using an indirect call.
BEQ ZERO, g, needm
MOV g_m(g), X5
MOV X5, savedm-8(SP)
JMP havem
needm:
MOV g, savedm-8(SP) // g is zero, so is m.
MOV $runtime·needAndBindM(SB), X6
JALR RA, X6
// Set m->sched.sp = SP, so that if a panic happens
// during the function we are about to execute, it will
// have a valid SP to run on the g0 stack.
// The next few lines (after the havem label)
// will save this SP onto the stack and then write
// the same SP back to m->sched.sp. That seems redundant,
// but if an unrecovered panic happens, unwindm will
// restore the g->sched.sp from the stack location
// and then systemstack will try to use it. If we don't set it here,
// that restored SP will be uninitialized (typically 0) and
// will not be usable.
MOV g_m(g), X5
MOV m_g0(X5), X6
MOV X2, (g_sched+gobuf_sp)(X6)
havem:
// Now there's a valid m, and we're running on its m->g0.
// Save current m->g0->sched.sp on stack and then set it to SP.
// Save current sp in m->g0->sched.sp in preparation for
// switch back to m->curg stack.
// NOTE: unwindm knows that the saved g->sched.sp is at 8(X2) aka savedsp-24(SP).
MOV m_g0(X5), X6
MOV (g_sched+gobuf_sp)(X6), X7
MOV X7, savedsp-24(SP) // must match frame size
MOV X2, (g_sched+gobuf_sp)(X6)
// Switch to m->curg stack and call runtime.cgocallbackg.
// Because we are taking over the execution of m->curg
// but *not* resuming what had been running, we need to
// save that information (m->curg->sched) so we can restore it.
// We can restore m->curg->sched.sp easily, because calling
// runtime.cgocallbackg leaves SP unchanged upon return.
// To save m->curg->sched.pc, we push it onto the curg stack and
// open a frame the same size as cgocallback's g0 frame.
// Once we switch to the curg stack, the pushed PC will appear
// to be the return PC of cgocallback, so that the traceback
// will seamlessly trace back into the earlier calls.
MOV m_curg(X5), g
CALL runtime·save_g(SB)
MOV (g_sched+gobuf_sp)(g), X6 // prepare stack as X6
MOV (g_sched+gobuf_pc)(g), X7
MOV X7, -(24+8)(X6) // "saved LR"; must match frame size
// Gather our arguments into registers.
MOV fn+0(FP), X7
MOV frame+8(FP), X8
MOV ctxt+16(FP), X9
MOV $-(24+8)(X6), X2 // switch stack; must match frame size
MOV X7, 8(X2)
MOV X8, 16(X2)
MOV X9, 24(X2)
CALL runtime·cgocallbackg(SB)
// Restore g->sched (== m->curg->sched) from saved values.
MOV 0(X2), X7
MOV X7, (g_sched+gobuf_pc)(g)
MOV $(24+8)(X2), X6 // must match frame size
MOV X6, (g_sched+gobuf_sp)(g)
// Switch back to m->g0's stack and restore m->g0->sched.sp.
// (Unlike m->curg, the g0 goroutine never uses sched.pc,
// so we do not have to restore it.)
MOV g_m(g), X5
MOV m_g0(X5), g
CALL runtime·save_g(SB)
MOV (g_sched+gobuf_sp)(g), X2
MOV savedsp-24(SP), X6 // must match frame size
MOV X6, (g_sched+gobuf_sp)(g)
// If the m on entry was nil, we called needm above to borrow an m,
// 1. for the duration of the call on non-pthread platforms,
// 2. or for the lifetime of the C thread on pthread platforms.
// If the m on entry wasn't nil,
// 1. the thread might be a Go thread,
// 2. or it's a later call from a C thread on pthread platforms,
// in which case we skip dropm so the m borrowed by the first call is reused.
MOV savedm-8(SP), X5
BNE ZERO, X5, droppedm
// If a pthread key has been created, skip dropm so the m can be reused by the next call.
MOV _cgo_pthread_key_created(SB), X5
// A nil _cgo_pthread_key_created means cgo is disabled, so dropm is needed.
BEQ ZERO, X5, dropm
MOV (X5), X5
BNE ZERO, X5, droppedm
dropm:
MOV $runtime·dropm(SB), X6
JALR RA, X6
droppedm:
// Done!
RET
TEXT runtime·breakpoint(SB),NOSPLIT|NOFRAME,$0-0
EBREAK
RET
TEXT runtime·abort(SB),NOSPLIT|NOFRAME,$0-0
EBREAK
RET
// void setg(G*); set g. for use by needm.
TEXT runtime·setg(SB), NOSPLIT, $0-8
MOV gg+0(FP), g
// This only happens if iscgo, so jump straight to save_g
CALL runtime·save_g(SB)
RET
TEXT ·checkASM(SB),NOSPLIT,$0-1
MOV $1, T0
MOV T0, ret+0(FP)
RET
// spillArgs stores return values from registers to a *internal/abi.RegArgs in X25.
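// Illustrative layout note (assuming internal/abi.RegArgs lays out the 16
// integer argument registers first, then the 16 floating point argument
// registers, 8 bytes each): that ordering is why the stores below run
// from (0*8)(X25) through (31*8)(X25).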
TEXT ·spillArgs(SB),NOSPLIT,$0-0
MOV X10, (0*8)(X25)
MOV X11, (1*8)(X25)
MOV X12, (2*8)(X25)
MOV X13, (3*8)(X25)
MOV X14, (4*8)(X25)
MOV X15, (5*8)(X25)
MOV X16, (6*8)(X25)
MOV X17, (7*8)(X25)
MOV X8, (8*8)(X25)
MOV X9, (9*8)(X25)
MOV X18, (10*8)(X25)
MOV X19, (11*8)(X25)
MOV X20, (12*8)(X25)
MOV X21, (13*8)(X25)
MOV X22, (14*8)(X25)
MOV X23, (15*8)(X25)
MOVD F10, (16*8)(X25)
MOVD F11, (17*8)(X25)
MOVD F12, (18*8)(X25)
MOVD F13, (19*8)(X25)
MOVD F14, (20*8)(X25)
MOVD F15, (21*8)(X25)
MOVD F16, (22*8)(X25)
MOVD F17, (23*8)(X25)
MOVD F8, (24*8)(X25)
MOVD F9, (25*8)(X25)
MOVD F18, (26*8)(X25)
MOVD F19, (27*8)(X25)
MOVD F20, (28*8)(X25)
MOVD F21, (29*8)(X25)
MOVD F22, (30*8)(X25)
MOVD F23, (31*8)(X25)
RET
// unspillArgs loads args into registers from a *internal/abi.RegArgs in X25.
TEXT ·unspillArgs(SB),NOSPLIT,$0-0
MOV (0*8)(X25), X10
MOV (1*8)(X25), X11
MOV (2*8)(X25), X12
MOV (3*8)(X25), X13
MOV (4*8)(X25), X14
MOV (5*8)(X25), X15
MOV (6*8)(X25), X16
MOV (7*8)(X25), X17
MOV (8*8)(X25), X8
MOV (9*8)(X25), X9
MOV (10*8)(X25), X18
MOV (11*8)(X25), X19
MOV (12*8)(X25), X20
MOV (13*8)(X25), X21
MOV (14*8)(X25), X22
MOV (15*8)(X25), X23
MOVD (16*8)(X25), F10
MOVD (17*8)(X25), F11
MOVD (18*8)(X25), F12
MOVD (19*8)(X25), F13
MOVD (20*8)(X25), F14
MOVD (21*8)(X25), F15
MOVD (22*8)(X25), F16
MOVD (23*8)(X25), F17
MOVD (24*8)(X25), F8
MOVD (25*8)(X25), F9
MOVD (26*8)(X25), F18
MOVD (27*8)(X25), F19
MOVD (28*8)(X25), F20
MOVD (29*8)(X25), F21
MOVD (30*8)(X25), F22
MOVD (31*8)(X25), F23
RET
// gcWriteBarrier informs the GC about heap pointer writes.
//
// gcWriteBarrier does NOT follow the Go ABI. It accepts the
// number of bytes of buffer needed in X24, and returns a pointer
// to the buffer space in X24.
// It clobbers X31 aka T6 (the linker temp register - REG_TMP).
// The act of CALLing gcWriteBarrier will clobber RA (LR).
// It does not clobber any other general-purpose registers,
// but may clobber others (e.g., floating point registers).
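// Illustrative caller-side protocol (a sketch of what compiled code does;
// A2/A3 are placeholders for whatever registers hold the pointers):
//
//	CALL	runtime·gcWriteBarrier2(SB)	// reserve room for two pointers
//	MOV	A2, 0(X24)			// queue first pointer
//	MOV	A3, 8(X24)			// queue second pointer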
TEXT gcWriteBarrier<>(SB),NOSPLIT,$208
// Save the registers clobbered by the fast path.
MOV A0, 24*8(X2)
MOV A1, 25*8(X2)
retry:
MOV g_m(g), A0
MOV m_p(A0), A0
MOV (p_wbBuf+wbBuf_next)(A0), A1
MOV (p_wbBuf+wbBuf_end)(A0), T6 // T6 is linker temp register (REG_TMP)
// Increment wbBuf.next position.
ADD X24, A1
// Is the buffer full?
BLTU T6, A1, flush
// Commit to the larger buffer.
MOV A1, (p_wbBuf+wbBuf_next)(A0)
// Make the return value (the original next position)
SUB X24, A1, X24
// Restore registers.
MOV 24*8(X2), A0
MOV 25*8(X2), A1
RET
flush:
// Save all general purpose registers since these could be
// clobbered by wbBufFlush and were not saved by the caller.
MOV T0, 1*8(X2)
MOV T1, 2*8(X2)
// X0 is zero register
// X1 is LR, saved by prologue
// X2 is SP
// X3 is GP
// X4 is TP
MOV X7, 3*8(X2)
MOV X8, 4*8(X2)
MOV X9, 5*8(X2)
// X10 already saved (A0)
// X11 already saved (A1)
MOV X12, 6*8(X2)
MOV X13, 7*8(X2)
MOV X14, 8*8(X2)
MOV X15, 9*8(X2)
MOV X16, 10*8(X2)
MOV X17, 11*8(X2)
MOV X18, 12*8(X2)
MOV X19, 13*8(X2)
MOV X20, 14*8(X2)
MOV X21, 15*8(X2)
MOV X22, 16*8(X2)
MOV X23, 17*8(X2)
MOV X24, 18*8(X2)
MOV X25, 19*8(X2)
MOV X26, 20*8(X2)
// X27 is g.
MOV X28, 21*8(X2)
MOV X29, 22*8(X2)
MOV X30, 23*8(X2)
// X31 is tmp register.
CALL runtime·wbBufFlush(SB)
MOV 1*8(X2), T0
MOV 2*8(X2), T1
MOV 3*8(X2), X7
MOV 4*8(X2), X8
MOV 5*8(X2), X9
MOV 6*8(X2), X12
MOV 7*8(X2), X13
MOV 8*8(X2), X14
MOV 9*8(X2), X15
MOV 10*8(X2), X16
MOV 11*8(X2), X17
MOV 12*8(X2), X18
MOV 13*8(X2), X19
MOV 14*8(X2), X20
MOV 15*8(X2), X21
MOV 16*8(X2), X22
MOV 17*8(X2), X23
MOV 18*8(X2), X24
MOV 19*8(X2), X25
MOV 20*8(X2), X26
MOV 21*8(X2), X28
MOV 22*8(X2), X29
MOV 23*8(X2), X30
JMP retry
TEXT runtime·gcWriteBarrier1<ABIInternal>(SB),NOSPLIT,$0
MOV $8, X24
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier2<ABIInternal>(SB),NOSPLIT,$0
MOV $16, X24
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier3<ABIInternal>(SB),NOSPLIT,$0
MOV $24, X24
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier4<ABIInternal>(SB),NOSPLIT,$0
MOV $32, X24
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier5<ABIInternal>(SB),NOSPLIT,$0
MOV $40, X24
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier6<ABIInternal>(SB),NOSPLIT,$0
MOV $48, X24
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier7<ABIInternal>(SB),NOSPLIT,$0
MOV $56, X24
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier8<ABIInternal>(SB),NOSPLIT,$0
MOV $64, X24
JMP gcWriteBarrier<>(SB)
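// The fast path above and its flush-and-retry loop amount to the following
// Go sketch (a simplified assumption of the per-P write-barrier buffer; the
// real wbBuf in mwbbuf.go uses raw pointers rather than indices):
package main

import "fmt"

type wbBuf struct {
	next, end int       // fill position and capacity, in slots
	buf       []uintptr // pointer slots handed to the GC
}

// reserve claims n slots, flushing to the GC when the buffer is full.
func (b *wbBuf) reserve(n int) []uintptr {
	for { // the "retry" label
		if b.next+n <= b.end { // "Is the buffer full?"
			p := b.next
			b.next += n // "Commit to the larger buffer."
			return b.buf[p : p+n]
		}
		b.flush() // CALL runtime·wbBufFlush(SB), then retry
	}
}

func (b *wbBuf) flush() { b.next = 0 /* drain entries to the GC here */ }

func main() {
	b := &wbBuf{end: 4, buf: make([]uintptr, 4)}
	for i := 0; i < 6; i++ {
		b.reserve(1)[0] = uintptr(i)
	}
	fmt.Println("flushed and refilled ok")
}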
// Note: these functions use a special calling convention to save generated code space.
// Arguments are passed in registers (ssa/gen/RISCV64Ops.go), but the space for those
// arguments is allocated in the caller's stack frame.
// These stubs write the args into that stack space and then tail call to the
// corresponding runtime handler.
// The tail call makes these stubs disappear in backtraces.
TEXT runtime·panicIndex<ABIInternal>(SB),NOSPLIT,$0-16
MOV T0, X10
MOV T1, X11
JMP runtime·goPanicIndex<ABIInternal>(SB)
TEXT runtime·panicIndexU<ABIInternal>(SB),NOSPLIT,$0-16
MOV T0, X10
MOV T1, X11
JMP runtime·goPanicIndexU<ABIInternal>(SB)
TEXT runtime·panicSliceAlen<ABIInternal>(SB),NOSPLIT,$0-16
MOV T1, X10
MOV T2, X11
JMP runtime·goPanicSliceAlen<ABIInternal>(SB)
TEXT runtime·panicSliceAlenU<ABIInternal>(SB),NOSPLIT,$0-16
MOV T1, X10
MOV T2, X11
JMP runtime·goPanicSliceAlenU<ABIInternal>(SB)
TEXT runtime·panicSliceAcap<ABIInternal>(SB),NOSPLIT,$0-16
MOV T1, X10
MOV T2, X11
JMP runtime·goPanicSliceAcap<ABIInternal>(SB)
TEXT runtime·panicSliceAcapU<ABIInternal>(SB),NOSPLIT,$0-16
MOV T1, X10
MOV T2, X11
JMP runtime·goPanicSliceAcapU<ABIInternal>(SB)
TEXT runtime·panicSliceB<ABIInternal>(SB),NOSPLIT,$0-16
MOV T0, X10
MOV T1, X11
JMP runtime·goPanicSliceB<ABIInternal>(SB)
TEXT runtime·panicSliceBU<ABIInternal>(SB),NOSPLIT,$0-16
MOV T0, X10
MOV T1, X11
JMP runtime·goPanicSliceBU<ABIInternal>(SB)
TEXT runtime·panicSlice3Alen<ABIInternal>(SB),NOSPLIT,$0-16
MOV T2, X10
MOV T3, X11
JMP runtime·goPanicSlice3Alen<ABIInternal>(SB)
TEXT runtime·panicSlice3AlenU<ABIInternal>(SB),NOSPLIT,$0-16
MOV T2, X10
MOV T3, X11
JMP runtime·goPanicSlice3AlenU<ABIInternal>(SB)
TEXT runtime·panicSlice3Acap<ABIInternal>(SB),NOSPLIT,$0-16
MOV T2, X10
MOV T3, X11
JMP runtime·goPanicSlice3Acap<ABIInternal>(SB)
TEXT runtime·panicSlice3AcapU<ABIInternal>(SB),NOSPLIT,$0-16
MOV T2, X10
MOV T3, X11
JMP runtime·goPanicSlice3AcapU<ABIInternal>(SB)
TEXT runtime·panicSlice3B<ABIInternal>(SB),NOSPLIT,$0-16
MOV T1, X10
MOV T2, X11
JMP runtime·goPanicSlice3B<ABIInternal>(SB)
TEXT runtime·panicSlice3BU<ABIInternal>(SB),NOSPLIT,$0-16
MOV T1, X10
MOV T2, X11
JMP runtime·goPanicSlice3BU<ABIInternal>(SB)
TEXT runtime·panicSlice3C<ABIInternal>(SB),NOSPLIT,$0-16
MOV T0, X10
MOV T1, X11
JMP runtime·goPanicSlice3C<ABIInternal>(SB)
TEXT runtime·panicSlice3CU<ABIInternal>(SB),NOSPLIT,$0-16
MOV T0, X10
MOV T1, X11
JMP runtime·goPanicSlice3CU<ABIInternal>(SB)
TEXT runtime·panicSliceConvert<ABIInternal>(SB),NOSPLIT,$0-16
MOV T2, X10
MOV T3, X11
JMP runtime·goPanicSliceConvert<ABIInternal>(SB)
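// Each stub above lands in a Go handler in runtime/panic.go. A simplified
// stand-in for the shape of those handlers (the real ones construct a
// boundsError rather than formatting a string):
package main

import "fmt"

// goPanicIndexSketch: x is the failing index, y the length or capacity
// it was checked against — exactly the two values the stubs move into
// X10 and X11.
func goPanicIndexSketch(x, y int) {
	panic(fmt.Sprintf("index out of range [%d] with length %d", x, y))
}

func main() {
	defer func() { fmt.Println("recovered:", recover()) }()
	goPanicIndexSketch(5, 3)
}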
DATA runtime·mainPC+0(SB)/8,$runtime·main<ABIInternal>(SB)
GLOBL runtime·mainPC(SB),RODATA,$8

974
src/runtime/asm_s390x.s Normal file

@@ -0,0 +1,974 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "go_asm.h"
#include "go_tls.h"
#include "funcdata.h"
#include "textflag.h"
// _rt0_s390x_lib is common startup code for s390x systems when
// using -buildmode=c-archive or -buildmode=c-shared. The linker will
// arrange to invoke this function as a global constructor (for
// c-archive) or when the shared library is loaded (for c-shared).
// We expect argc and argv to be passed in the usual C ABI registers
// R2 and R3.
TEXT _rt0_s390x_lib(SB), NOSPLIT|NOFRAME, $0
// Save R6-R15 in the register save area of the calling function.
STMG R6, R15, 48(R15)
MOVD R2, _rt0_s390x_lib_argc<>(SB)
MOVD R3, _rt0_s390x_lib_argv<>(SB)
// Allocate 80 bytes on the stack.
MOVD $-80(R15), R15
// Save F8-F15 in our stack frame.
FMOVD F8, 16(R15)
FMOVD F9, 24(R15)
FMOVD F10, 32(R15)
FMOVD F11, 40(R15)
FMOVD F12, 48(R15)
FMOVD F13, 56(R15)
FMOVD F14, 64(R15)
FMOVD F15, 72(R15)
// Synchronous initialization.
MOVD $runtime·libpreinit(SB), R1
BL R1
// Create a new thread to finish Go runtime initialization.
MOVD _cgo_sys_thread_create(SB), R1
CMP R1, $0
BEQ nocgo
MOVD $_rt0_s390x_lib_go(SB), R2
MOVD $0, R3
BL R1
BR restore
nocgo:
MOVD $0x800000, R1 // stacksize
MOVD R1, 0(R15)
MOVD $_rt0_s390x_lib_go(SB), R1
MOVD R1, 8(R15) // fn
MOVD $runtime·newosproc(SB), R1
BL R1
restore:
// Restore F8-F15 from our stack frame.
FMOVD 16(R15), F8
FMOVD 24(R15), F9
FMOVD 32(R15), F10
FMOVD 40(R15), F11
FMOVD 48(R15), F12
FMOVD 56(R15), F13
FMOVD 64(R15), F14
FMOVD 72(R15), F15
MOVD $80(R15), R15
// Restore R6-R15.
LMG 48(R15), R6, R15
RET
// _rt0_s390x_lib_go initializes the Go runtime.
// This is started in a separate thread by _rt0_s390x_lib.
TEXT _rt0_s390x_lib_go(SB), NOSPLIT|NOFRAME, $0
MOVD _rt0_s390x_lib_argc<>(SB), R2
MOVD _rt0_s390x_lib_argv<>(SB), R3
MOVD $runtime·rt0_go(SB), R1
BR R1
DATA _rt0_s390x_lib_argc<>(SB)/8, $0
GLOBL _rt0_s390x_lib_argc<>(SB), NOPTR, $8
DATA _rt0_s390x_lib_argv<>(SB)/8, $0
GLOBL _rt0_s390x_lib_argv<>(SB), NOPTR, $8
TEXT runtime·rt0_go(SB),NOSPLIT|TOPFRAME,$0
// R2 = argc; R3 = argv; R11 = temp; R13 = g; R15 = stack pointer
// C TLS base pointer in AR0:AR1
// initialize essential registers
XOR R0, R0
SUB $24, R15
MOVW R2, 8(R15) // argc
MOVD R3, 16(R15) // argv
// create istack out of the given (operating system) stack.
// _cgo_init may update stackguard.
MOVD $runtime·g0(SB), g
MOVD R15, R11
SUB $(64*1024), R11
MOVD R11, g_stackguard0(g)
MOVD R11, g_stackguard1(g)
MOVD R11, (g_stack+stack_lo)(g)
MOVD R15, (g_stack+stack_hi)(g)
// if there is a _cgo_init, call it using the gcc ABI.
MOVD _cgo_init(SB), R11
CMPBEQ R11, $0, nocgo
MOVW AR0, R4 // (AR0 << 32 | AR1) is the TLS base pointer; MOVD is translated to EAR
SLD $32, R4, R4
MOVW AR1, R4 // arg 2: TLS base pointer
MOVD $setg_gcc<>(SB), R3 // arg 1: setg
MOVD g, R2 // arg 0: G
// C functions expect 160 bytes of space on caller stack frame
// and an 8-byte aligned stack pointer
MOVD R15, R9 // save current stack (R9 is preserved in the Linux ABI)
SUB $160, R15 // reserve 160 bytes
MOVD $~7, R6
AND R6, R15 // 8-byte align
BL R11 // this call clobbers volatile registers according to Linux ABI (R0-R5, R14)
MOVD R9, R15 // restore stack
XOR R0, R0 // zero R0
nocgo:
// update stackguard after _cgo_init
MOVD (g_stack+stack_lo)(g), R2
ADD $const_stackGuard, R2
MOVD R2, g_stackguard0(g)
MOVD R2, g_stackguard1(g)
// set the per-goroutine and per-mach "registers"
MOVD $runtime·m0(SB), R2
// save m->g0 = g0
MOVD g, m_g0(R2)
// save m0 to g0->m
MOVD R2, g_m(g)
BL runtime·check(SB)
// argc/argv are already prepared on stack
BL runtime·args(SB)
BL runtime·checkS390xCPU(SB)
BL runtime·osinit(SB)
BL runtime·schedinit(SB)
// create a new goroutine to start program
MOVD $runtime·mainPC(SB), R2 // entry
SUB $16, R15
MOVD R2, 8(R15)
MOVD $0, 0(R15)
BL runtime·newproc(SB)
ADD $16, R15
// start this M
BL runtime·mstart(SB)
MOVD $0, 1(R0)
RET
DATA runtime·mainPC+0(SB)/8,$runtime·main(SB)
GLOBL runtime·mainPC(SB),RODATA,$8
TEXT runtime·breakpoint(SB),NOSPLIT|NOFRAME,$0-0
BRRK
RET
TEXT runtime·asminit(SB),NOSPLIT|NOFRAME,$0-0
RET
TEXT runtime·mstart(SB),NOSPLIT|TOPFRAME,$0
CALL runtime·mstart0(SB)
RET // not reached
/*
* go-routine
*/
// void gogo(Gobuf*)
// restore state from Gobuf; longjmp
TEXT runtime·gogo(SB), NOSPLIT|NOFRAME, $0-8
MOVD buf+0(FP), R5
MOVD gobuf_g(R5), R6
MOVD 0(R6), R7 // make sure g != nil
BR gogo<>(SB)
TEXT gogo<>(SB), NOSPLIT|NOFRAME, $0
MOVD R6, g
BL runtime·save_g(SB)
MOVD 0(g), R4
MOVD gobuf_sp(R5), R15
MOVD gobuf_lr(R5), LR
MOVD gobuf_ret(R5), R3
MOVD gobuf_ctxt(R5), R12
MOVD $0, gobuf_sp(R5)
MOVD $0, gobuf_ret(R5)
MOVD $0, gobuf_lr(R5)
MOVD $0, gobuf_ctxt(R5)
CMP R0, R0 // set condition codes for == test, needed by stack split
MOVD gobuf_pc(R5), R6
BR (R6)
// void mcall(fn func(*g))
// Switch to m->g0's stack, call fn(g).
// Fn must never return. It should gogo(&g->sched)
// to keep running g.
TEXT runtime·mcall(SB), NOSPLIT, $-8-8
// Save caller state in g->sched
MOVD R15, (g_sched+gobuf_sp)(g)
MOVD LR, (g_sched+gobuf_pc)(g)
MOVD $0, (g_sched+gobuf_lr)(g)
// Switch to m->g0 & its stack, call fn.
MOVD g, R3
MOVD g_m(g), R8
MOVD m_g0(R8), g
BL runtime·save_g(SB)
CMP g, R3
BNE 2(PC)
BR runtime·badmcall(SB)
MOVD fn+0(FP), R12 // context
MOVD 0(R12), R4 // code pointer
MOVD (g_sched+gobuf_sp)(g), R15 // sp = m->g0->sched.sp
SUB $16, R15
MOVD R3, 8(R15)
MOVD $0, 0(R15)
BL (R4)
BR runtime·badmcall2(SB)
// systemstack_switch is a dummy routine that systemstack leaves at the bottom
// of the G stack. We need to distinguish the routine that
// lives at the bottom of the G stack from the one that lives
// at the top of the system stack because the one at the top of
// the system stack terminates the stack walk (see topofstack()).
TEXT runtime·systemstack_switch(SB), NOSPLIT, $0-0
UNDEF
BL (LR) // make sure this function is not leaf
RET
// func systemstack(fn func())
TEXT runtime·systemstack(SB), NOSPLIT, $0-8
MOVD fn+0(FP), R3 // R3 = fn
MOVD R3, R12 // context
MOVD g_m(g), R4 // R4 = m
MOVD m_gsignal(R4), R5 // R5 = gsignal
CMPBEQ g, R5, noswitch
MOVD m_g0(R4), R5 // R5 = g0
CMPBEQ g, R5, noswitch
MOVD m_curg(R4), R6
CMPBEQ g, R6, switch
// Bad: g is not gsignal, not g0, not curg. What is it?
// Hide call from linker nosplit analysis.
MOVD $runtime·badsystemstack(SB), R3
BL (R3)
BL runtime·abort(SB)
switch:
// save our state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
BL gosave_systemstack_switch<>(SB)
// switch to g0
MOVD R5, g
BL runtime·save_g(SB)
MOVD (g_sched+gobuf_sp)(g), R15
// call target function
MOVD 0(R12), R3 // code pointer
BL (R3)
// switch back to g
MOVD g_m(g), R3
MOVD m_curg(R3), g
BL runtime·save_g(SB)
MOVD (g_sched+gobuf_sp)(g), R15
MOVD $0, (g_sched+gobuf_sp)(g)
RET
noswitch:
// already on m stack, just call directly
// Using a tail call here cleans up tracebacks since we won't stop
// at an intermediate systemstack.
MOVD 0(R12), R3 // code pointer
MOVD 0(R15), LR // restore LR
ADD $8, R15
BR (R3)
// func switchToCrashStack0(fn func())
TEXT runtime·switchToCrashStack0<ABIInternal>(SB), NOSPLIT, $0-8
MOVD fn+0(FP), R12 // context
MOVD g_m(g), R4 // curm
// set g to gcrash
MOVD $runtime·gcrash(SB), g // g = &gcrash
BL runtime·save_g(SB)
MOVD R4, g_m(g) // g.m = curm
MOVD g, m_g0(R4) // curm.g0 = g
// switch to crashstack
MOVD (g_stack+stack_hi)(g), R4
ADD $(-4*8), R4, R15
// call target function
MOVD 0(R12), R3 // code pointer
BL (R3)
// should never return
BL runtime·abort(SB)
UNDEF
/*
* support for morestack
*/
// Called during function prolog when more stack is needed.
// Caller has already loaded:
// R3: framesize, R4: argsize, R5: LR
//
// The traceback routines see morestack on a g0 as being
// the top of a stack (for example, morestack calling newstack
// calling the scheduler calling newm calling gc), so we must
// record an argument size. For that purpose, it has no arguments.
TEXT runtime·morestack(SB),NOSPLIT|NOFRAME,$0-0
// Called from f.
// Set g->sched to context in f.
MOVD R15, (g_sched+gobuf_sp)(g)
MOVD LR, R8
MOVD R8, (g_sched+gobuf_pc)(g)
MOVD R5, (g_sched+gobuf_lr)(g)
MOVD R12, (g_sched+gobuf_ctxt)(g)
// Cannot grow scheduler stack (m->g0).
MOVD g_m(g), R7
MOVD m_g0(R7), R8
CMPBNE g, R8, 3(PC)
BL runtime·badmorestackg0(SB)
BL runtime·abort(SB)
// Cannot grow signal stack (m->gsignal).
MOVD m_gsignal(R7), R8
CMP g, R8
BNE 3(PC)
BL runtime·badmorestackgsignal(SB)
BL runtime·abort(SB)
// Called from f.
// Set m->morebuf to f's caller.
MOVD R5, (m_morebuf+gobuf_pc)(R7) // f's caller's PC
MOVD R15, (m_morebuf+gobuf_sp)(R7) // f's caller's SP
MOVD g, (m_morebuf+gobuf_g)(R7)
// Call newstack on m->g0's stack.
MOVD m_g0(R7), g
BL runtime·save_g(SB)
MOVD (g_sched+gobuf_sp)(g), R15
// Create a stack frame on g0 to call newstack.
MOVD $0, -8(R15) // Zero saved LR in frame
SUB $8, R15
BL runtime·newstack(SB)
// Not reached, but make sure the return PC from the call to newstack
// is still in this function, and not the beginning of the next.
UNDEF
TEXT runtime·morestack_noctxt(SB),NOSPLIT|NOFRAME,$0-0
// Force SPWRITE. This function doesn't actually write SP,
// but it is called with a special calling convention where
// the caller doesn't save LR on stack but passes it as a
// register (R5), which the unwinder currently doesn't understand.
// Make it SPWRITE to stop unwinding. (See issue 54332)
MOVD R15, R15
MOVD $0, R12
BR runtime·morestack(SB)
// reflectcall: call a function with the given argument list
// func call(stackArgsType *_type, f *FuncVal, stackArgs *byte, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs).
// we don't have variable-sized frames, so we use a small number
// of constant-sized-frame functions to encode a few bits of size in the pc.
// Caution: ugly multiline assembly macros in your future!
#define DISPATCH(NAME,MAXSIZE) \
MOVD $MAXSIZE, R4; \
CMP R3, R4; \
BGT 3(PC); \
MOVD $NAME(SB), R5; \
BR (R5)
// Note: can't just "BR NAME(SB)" - bad inlining results.
TEXT ·reflectcall(SB), NOSPLIT, $-8-48
MOVWZ frameSize+32(FP), R3
DISPATCH(runtime·call16, 16)
DISPATCH(runtime·call32, 32)
DISPATCH(runtime·call64, 64)
DISPATCH(runtime·call128, 128)
DISPATCH(runtime·call256, 256)
DISPATCH(runtime·call512, 512)
DISPATCH(runtime·call1024, 1024)
DISPATCH(runtime·call2048, 2048)
DISPATCH(runtime·call4096, 4096)
DISPATCH(runtime·call8192, 8192)
DISPATCH(runtime·call16384, 16384)
DISPATCH(runtime·call32768, 32768)
DISPATCH(runtime·call65536, 65536)
DISPATCH(runtime·call131072, 131072)
DISPATCH(runtime·call262144, 262144)
DISPATCH(runtime·call524288, 524288)
DISPATCH(runtime·call1048576, 1048576)
DISPATCH(runtime·call2097152, 2097152)
DISPATCH(runtime·call4194304, 4194304)
DISPATCH(runtime·call8388608, 8388608)
DISPATCH(runtime·call16777216, 16777216)
DISPATCH(runtime·call33554432, 33554432)
DISPATCH(runtime·call67108864, 67108864)
DISPATCH(runtime·call134217728, 134217728)
DISPATCH(runtime·call268435456, 268435456)
DISPATCH(runtime·call536870912, 536870912)
DISPATCH(runtime·call1073741824, 1073741824)
MOVD $runtime·badreflectcall(SB), R5
BR (R5)
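// What the DISPATCH chain computes, as a Go sketch: the smallest
// fixed-frame call* variant whose frame holds frameSize bytes.
package main

import "fmt"

func dispatch(frameSize uint32) uint32 {
	for size := uint32(16); size <= 1<<30; size <<= 1 {
		if frameSize <= size {
			return size // would BR to the matching runtime·call<size>
		}
	}
	panic("badreflectcall") // frame too large for any variant
}

func main() { fmt.Println(dispatch(100)) } // prints 128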
#define CALLFN(NAME,MAXSIZE) \
TEXT NAME(SB), WRAPPER, $MAXSIZE-48; \
NO_LOCAL_POINTERS; \
/* copy arguments to stack */ \
MOVD stackArgs+16(FP), R4; \
MOVWZ stackArgsSize+24(FP), R5; \
MOVD $stack-MAXSIZE(SP), R6; \
loopArgs: /* copy 256 bytes at a time */ \
CMP R5, $256; \
BLT tailArgs; \
SUB $256, R5; \
MVC $256, 0(R4), 0(R6); \
MOVD $256(R4), R4; \
MOVD $256(R6), R6; \
BR loopArgs; \
tailArgs: /* copy remaining bytes */ \
CMP R5, $0; \
BEQ callFunction; \
SUB $1, R5; \
EXRL $callfnMVC<>(SB), R5; \
callFunction: \
MOVD f+8(FP), R12; \
MOVD (R12), R8; \
PCDATA $PCDATA_StackMapIndex, $0; \
BL (R8); \
/* copy return values back */ \
MOVD stackArgsType+0(FP), R7; \
MOVD stackArgs+16(FP), R6; \
MOVWZ stackArgsSize+24(FP), R5; \
MOVD $stack-MAXSIZE(SP), R4; \
MOVWZ stackRetOffset+28(FP), R1; \
ADD R1, R4; \
ADD R1, R6; \
SUB R1, R5; \
BL callRet<>(SB); \
RET
// callRet copies return values back at the end of call*. This is a
// separate function so it can allocate stack space for the arguments
// to reflectcallmove. It does not follow the Go ABI; it expects its
// arguments in registers.
TEXT callRet<>(SB), NOSPLIT, $40-0
MOVD R7, 8(R15)
MOVD R6, 16(R15)
MOVD R4, 24(R15)
MOVD R5, 32(R15)
MOVD $0, 40(R15)
BL runtime·reflectcallmove(SB)
RET
CALLFN(·call16, 16)
CALLFN(·call32, 32)
CALLFN(·call64, 64)
CALLFN(·call128, 128)
CALLFN(·call256, 256)
CALLFN(·call512, 512)
CALLFN(·call1024, 1024)
CALLFN(·call2048, 2048)
CALLFN(·call4096, 4096)
CALLFN(·call8192, 8192)
CALLFN(·call16384, 16384)
CALLFN(·call32768, 32768)
CALLFN(·call65536, 65536)
CALLFN(·call131072, 131072)
CALLFN(·call262144, 262144)
CALLFN(·call524288, 524288)
CALLFN(·call1048576, 1048576)
CALLFN(·call2097152, 2097152)
CALLFN(·call4194304, 4194304)
CALLFN(·call8388608, 8388608)
CALLFN(·call16777216, 16777216)
CALLFN(·call33554432, 33554432)
CALLFN(·call67108864, 67108864)
CALLFN(·call134217728, 134217728)
CALLFN(·call268435456, 268435456)
CALLFN(·call536870912, 536870912)
CALLFN(·call1073741824, 1073741824)
// Not a function: target for EXRL (execute relative long) instruction.
TEXT callfnMVC<>(SB),NOSPLIT|NOFRAME,$0-0
MVC $1, 0(R4), 0(R6)
TEXT runtime·procyield(SB),NOSPLIT,$0-0
RET
// Save state of caller into g->sched,
// but using fake PC from systemstack_switch.
// Must only be called from functions with no locals ($0)
// or else unwinding from systemstack_switch is incorrect.
// Smashes R1.
TEXT gosave_systemstack_switch<>(SB),NOSPLIT|NOFRAME,$0
MOVD $runtime·systemstack_switch(SB), R1
ADD $16, R1 // get past prologue
MOVD R1, (g_sched+gobuf_pc)(g)
MOVD R15, (g_sched+gobuf_sp)(g)
MOVD $0, (g_sched+gobuf_lr)(g)
MOVD $0, (g_sched+gobuf_ret)(g)
// Assert ctxt is zero. See func save.
MOVD (g_sched+gobuf_ctxt)(g), R1
CMPBEQ R1, $0, 2(PC)
BL runtime·abort(SB)
RET
// func asmcgocall(fn, arg unsafe.Pointer) int32
// Call fn(arg) on the scheduler stack,
// aligned appropriately for the gcc ABI.
// See cgocall.go for more details.
TEXT ·asmcgocall(SB),NOSPLIT,$0-20
// R2 = argc; R3 = argv; R11 = temp; R13 = g; R15 = stack pointer
// C TLS base pointer in AR0:AR1
MOVD fn+0(FP), R3
MOVD arg+8(FP), R4
MOVD R15, R2 // save original stack pointer
MOVD g, R5
// Figure out if we need to switch to m->g0 stack.
// We get called to create new OS threads too, and those
// come in on the m->g0 stack already. Or we might already
// be on the m->gsignal stack.
MOVD g_m(g), R6
MOVD m_gsignal(R6), R7
CMPBEQ R7, g, g0
MOVD m_g0(R6), R7
CMPBEQ R7, g, g0
BL gosave_systemstack_switch<>(SB)
MOVD R7, g
BL runtime·save_g(SB)
MOVD (g_sched+gobuf_sp)(g), R15
// Now on a scheduling stack (a pthread-created stack).
g0:
// Save room for two of our pointers, plus 160 bytes of callee
// save area that lives on the caller stack.
SUB $176, R15
MOVD $~7, R6
AND R6, R15 // 8-byte alignment for gcc ABI
MOVD R5, 168(R15) // save old g on stack
MOVD (g_stack+stack_hi)(R5), R5
SUB R2, R5
MOVD R5, 160(R15) // save depth in old g stack (can't just save SP, as stack might be copied during a callback)
MOVD $0, 0(R15) // clear back chain pointer (TODO can we give it real back trace information?)
MOVD R4, R2 // arg in R2
BL R3 // can clobber: R0-R5, R14, F0-F3, F5, F7-F15
XOR R0, R0 // set R0 back to 0.
// Restore g, stack pointer.
MOVD 168(R15), g
BL runtime·save_g(SB)
MOVD (g_stack+stack_hi)(g), R5
MOVD 160(R15), R6
SUB R6, R5
MOVD R5, R15
MOVW R2, ret+16(FP)
RET
// cgocallback(fn, frame unsafe.Pointer, ctxt uintptr)
// See cgocall.go for more details.
TEXT ·cgocallback(SB),NOSPLIT,$24-24
NO_LOCAL_POINTERS
// Skip cgocallbackg, just dropm when fn is nil, and frame is the saved g.
// It is used to dropm while thread is exiting.
MOVD fn+0(FP), R1
CMPBNE R1, $0, loadg
// Restore the g from frame.
MOVD frame+8(FP), g
BR dropm
loadg:
// Load m and g from thread-local storage.
MOVB runtime·iscgo(SB), R3
CMPBEQ R3, $0, nocgo
BL runtime·load_g(SB)
nocgo:
// If g is nil, then either Go did not create the current thread,
// or this thread never called into Go on pthread platforms.
// Call needm to obtain one for temporary use.
// In this case, we're running on the thread stack, so there's
// lots of space, but the linker doesn't know. Hide the call from
// the linker analysis by using an indirect call.
CMPBEQ g, $0, needm
MOVD g_m(g), R8
MOVD R8, savedm-8(SP)
BR havem
needm:
MOVD g, savedm-8(SP) // g is zero, so is m.
MOVD $runtime·needAndBindM(SB), R3
BL (R3)
// Set m->sched.sp = SP, so that if a panic happens
// during the function we are about to execute, it will
// have a valid SP to run on the g0 stack.
// The next few lines (after the havem label)
// will save this SP onto the stack and then write
// the same SP back to m->sched.sp. That seems redundant,
// but if an unrecovered panic happens, unwindm will
// restore the g->sched.sp from the stack location
// and then systemstack will try to use it. If we don't set it here,
// that restored SP will be uninitialized (typically 0) and
// will not be usable.
MOVD g_m(g), R8
MOVD m_g0(R8), R3
MOVD R15, (g_sched+gobuf_sp)(R3)
havem:
// Now there's a valid m, and we're running on its m->g0.
// Save current m->g0->sched.sp on stack and then set it to SP.
// Save current sp in m->g0->sched.sp in preparation for
// switch back to m->curg stack.
// NOTE: unwindm knows that the saved g->sched.sp is at 8(R1) aka savedsp-16(SP).
MOVD m_g0(R8), R3
MOVD (g_sched+gobuf_sp)(R3), R4
MOVD R4, savedsp-24(SP) // must match frame size
MOVD R15, (g_sched+gobuf_sp)(R3)
// Switch to m->curg stack and call runtime.cgocallbackg.
// Because we are taking over the execution of m->curg
// but *not* resuming what had been running, we need to
// save that information (m->curg->sched) so we can restore it.
// We can restore m->curg->sched.sp easily, because calling
// runtime.cgocallbackg leaves SP unchanged upon return.
// To save m->curg->sched.pc, we push it onto the curg stack and
// open a frame the same size as cgocallback's g0 frame.
// Once we switch to the curg stack, the pushed PC will appear
// to be the return PC of cgocallback, so that the traceback
// will seamlessly trace back into the earlier calls.
MOVD m_curg(R8), g
BL runtime·save_g(SB)
MOVD (g_sched+gobuf_sp)(g), R4 // prepare stack as R4
MOVD (g_sched+gobuf_pc)(g), R5
MOVD R5, -(24+8)(R4) // "saved LR"; must match frame size
// Gather our arguments into registers.
MOVD fn+0(FP), R1
MOVD frame+8(FP), R2
MOVD ctxt+16(FP), R3
MOVD $-(24+8)(R4), R15 // switch stack; must match frame size
MOVD R1, 8(R15)
MOVD R2, 16(R15)
MOVD R3, 24(R15)
BL runtime·cgocallbackg(SB)
// Restore g->sched (== m->curg->sched) from saved values.
MOVD 0(R15), R5
MOVD R5, (g_sched+gobuf_pc)(g)
MOVD $(24+8)(R15), R4 // must match frame size
MOVD R4, (g_sched+gobuf_sp)(g)
// Switch back to m->g0's stack and restore m->g0->sched.sp.
// (Unlike m->curg, the g0 goroutine never uses sched.pc,
// so we do not have to restore it.)
MOVD g_m(g), R8
MOVD m_g0(R8), g
BL runtime·save_g(SB)
MOVD (g_sched+gobuf_sp)(g), R15
MOVD savedsp-24(SP), R4 // must match frame size
MOVD R4, (g_sched+gobuf_sp)(g)
// If the m on entry was nil, we called needm above to borrow an m,
// 1. for the duration of the call on non-pthread platforms,
// 2. or the duration of the C thread alive on pthread platforms.
// If the m on entry wasn't nil,
// 1. the thread might be a Go thread,
// 2. or it wasn't the first call from a C thread on pthread platforms,
// since then we skip dropm to reuse the m in the first call.
MOVD savedm-8(SP), R6
CMPBNE R6, $0, droppedm
// Skip dropm to reuse it in the next call, when a pthread key has been created.
MOVD _cgo_pthread_key_created(SB), R6
// A nil _cgo_pthread_key_created pointer means cgo is disabled, so we need dropm.
CMPBEQ R6, $0, dropm
MOVD (R6), R6
CMPBNE R6, $0, droppedm
dropm:
MOVD $runtime·dropm(SB), R3
BL (R3)
droppedm:
// Done!
RET
// void setg(G*); set g. for use by needm.
TEXT runtime·setg(SB), NOSPLIT, $0-8
MOVD gg+0(FP), g
// This only happens if iscgo, so jump straight to save_g
BL runtime·save_g(SB)
RET
// void setg_gcc(G*); set g in C TLS.
// Must obey the gcc calling convention.
TEXT setg_gcc<>(SB),NOSPLIT|NOFRAME,$0-0
// The standard prologue clobbers LR (R14), which is callee-save in
// the C ABI, so we have to use NOFRAME and save LR ourselves.
MOVD LR, R1
// Also save g, R10, and R11 since they're callee-save in C ABI
MOVD R10, R3
MOVD g, R4
MOVD R11, R5
MOVD R2, g
BL runtime·save_g(SB)
MOVD R5, R11
MOVD R4, g
MOVD R3, R10
MOVD R1, LR
RET
TEXT runtime·abort(SB),NOSPLIT|NOFRAME,$0-0
MOVW (R0), R0
UNDEF
// int64 runtime·cputicks(void)
TEXT runtime·cputicks(SB),NOSPLIT,$0-8
// The TOD clock on s390 counts from the year 1900 in ~250ps intervals.
// This means that since about 1972 the msb has been set, making the
// result of a call to STORE CLOCK (stck) a negative number.
// We clear the msb to make it positive.
STCK ret+0(FP) // serialises before and after call
MOVD ret+0(FP), R3 // R3 will wrap to 0 in the year 2043
SLD $1, R3
SRD $1, R3
MOVD R3, ret+0(FP)
RET
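// The SLD $1 / SRD $1 pair is simply clearing bit 63; in Go:
package main

import "fmt"

func clearMSB(stck uint64) int64 {
	return int64(stck << 1 >> 1) // shift the msb out, then shift back
}

func main() { fmt.Println(clearMSB(1<<63 | 42)) } // prints 42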
// AES hashing not implemented for s390x
TEXT runtime·memhash(SB),NOSPLIT|NOFRAME,$0-32
JMP runtime·memhashFallback(SB)
TEXT runtime·strhash(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·strhashFallback(SB)
TEXT runtime·memhash32(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·memhash32Fallback(SB)
TEXT runtime·memhash64(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·memhash64Fallback(SB)
TEXT runtime·return0(SB), NOSPLIT, $0
MOVW $0, R3
RET
// Called from cgo wrappers, this function returns g->m->curg.stack.hi.
// Must obey the gcc calling convention.
TEXT _cgo_topofstack(SB),NOSPLIT|NOFRAME,$0
// g (R13), R10, R11 and LR (R14) are callee-save in the C ABI, so save them
MOVD g, R1
MOVD R10, R3
MOVD LR, R4
MOVD R11, R5
BL runtime·load_g(SB) // clobbers g (R13), R10, R11
MOVD g_m(g), R2
MOVD m_curg(R2), R2
MOVD (g_stack+stack_hi)(R2), R2
MOVD R1, g
MOVD R3, R10
MOVD R4, LR
MOVD R5, R11
RET
// The top-most function running on a goroutine
// returns to goexit+PCQuantum.
TEXT runtime·goexit(SB),NOSPLIT|NOFRAME|TOPFRAME,$0-0
BYTE $0x07; BYTE $0x00; // 2-byte nop
BL runtime·goexit1(SB) // does not return
// traceback from goexit1 must hit code range of goexit
BYTE $0x07; BYTE $0x00; // 2-byte nop
TEXT ·publicationBarrier(SB),NOSPLIT|NOFRAME,$0-0
// Stores are already ordered on s390x, so this is just a
// compile barrier.
RET
// This is called from .init_array and follows the platform, not Go, ABI.
// We are overly conservative. We could only save the registers we use.
// However, since this function is only called once per loaded module
// performance is unimportant.
TEXT runtime·addmoduledata(SB),NOSPLIT|NOFRAME,$0-0
// Save R6-R15 in the register save area of the calling function.
// Don't bother saving F8-F15 as we aren't doing any calls.
STMG R6, R15, 48(R15)
// append the argument (passed in R2, as per the ELF ABI) to the
// moduledata linked list.
MOVD runtime·lastmoduledatap(SB), R1
MOVD R2, moduledata_next(R1)
MOVD R2, runtime·lastmoduledatap(SB)
// Restore R6-R15.
LMG 48(R15), R6, R15
RET
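// In Go terms, addmoduledata is a two-pointer list append. A sketch
// (the real moduledata type lives in runtime/symtab.go; this one is
// illustrative):
package main

import "fmt"

type moduledataSketch struct {
	name string
	next *moduledataSketch
}

var lastmoduledatap = &moduledataSketch{name: "main"}

func addmoduledata(md *moduledataSketch) {
	lastmoduledatap.next = md // MOVD R2, moduledata_next(R1)
	lastmoduledatap = md      // MOVD R2, runtime·lastmoduledatap(SB)
}

func main() {
	addmoduledata(&moduledataSketch{name: "plugin"})
	fmt.Println(lastmoduledatap.name) // prints "plugin"
}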
TEXT ·checkASM(SB),NOSPLIT,$0-1
MOVB $1, ret+0(FP)
RET
// gcWriteBarrier informs the GC about heap pointer writes.
//
// gcWriteBarrier does NOT follow the Go ABI. It accepts the
// number of bytes of buffer needed in R9, and returns a pointer
// to the buffer space in R9.
// It clobbers R10 (the temp register) and R1 (used by PLT stub).
// It does not clobber any other general-purpose registers,
// but may clobber others (e.g., floating point registers).
TEXT gcWriteBarrier<>(SB),NOSPLIT,$96
// Save the registers clobbered by the fast path.
MOVD R4, 96(R15)
retry:
MOVD g_m(g), R1
MOVD m_p(R1), R1
// Increment wbBuf.next position.
MOVD R9, R4
ADD (p_wbBuf+wbBuf_next)(R1), R4
// Is the buffer full?
MOVD (p_wbBuf+wbBuf_end)(R1), R10
CMPUBGT R4, R10, flush
// Commit to the larger buffer.
MOVD R4, (p_wbBuf+wbBuf_next)(R1)
// Make return value (the original next position)
SUB R9, R4, R9
// Restore registers.
MOVD 96(R15), R4
RET
flush:
// Save all general purpose registers since these could be
// clobbered by wbBufFlush and were not saved by the caller.
STMG R2, R3, 8(R15)
MOVD R0, 24(R15)
// R1 already saved.
// R4 already saved.
STMG R5, R12, 32(R15) // save R5 - R12
// R13 is g.
// R14 is LR.
// R15 is SP.
CALL runtime·wbBufFlush(SB)
LMG 8(R15), R2, R3 // restore R2 - R3
MOVD 24(R15), R0 // restore R0
LMG 32(R15), R5, R12 // restore R5 - R12
JMP retry
TEXT runtime·gcWriteBarrier1<ABIInternal>(SB),NOSPLIT,$0
MOVD $8, R9
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier2<ABIInternal>(SB),NOSPLIT,$0
MOVD $16, R9
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier3<ABIInternal>(SB),NOSPLIT,$0
MOVD $24, R9
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier4<ABIInternal>(SB),NOSPLIT,$0
MOVD $32, R9
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier5<ABIInternal>(SB),NOSPLIT,$0
MOVD $40, R9
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier6<ABIInternal>(SB),NOSPLIT,$0
MOVD $48, R9
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier7<ABIInternal>(SB),NOSPLIT,$0
MOVD $56, R9
JMP gcWriteBarrier<>(SB)
TEXT runtime·gcWriteBarrier8<ABIInternal>(SB),NOSPLIT,$0
MOVD $64, R9
JMP gcWriteBarrier<>(SB)
// Note: these functions use a special calling convention to save generated code space.
// Arguments are passed in registers, but the space for those arguments is allocated
// in the caller's stack frame. These stubs write the args into that stack space and
// then tail call to the corresponding runtime handler.
// The tail call makes these stubs disappear in backtraces.
TEXT runtime·panicIndex(SB),NOSPLIT,$0-16
MOVD R0, x+0(FP)
MOVD R1, y+8(FP)
JMP runtime·goPanicIndex(SB)
TEXT runtime·panicIndexU(SB),NOSPLIT,$0-16
MOVD R0, x+0(FP)
MOVD R1, y+8(FP)
JMP runtime·goPanicIndexU(SB)
TEXT runtime·panicSliceAlen(SB),NOSPLIT,$0-16
MOVD R1, x+0(FP)
MOVD R2, y+8(FP)
JMP runtime·goPanicSliceAlen(SB)
TEXT runtime·panicSliceAlenU(SB),NOSPLIT,$0-16
MOVD R1, x+0(FP)
MOVD R2, y+8(FP)
JMP runtime·goPanicSliceAlenU(SB)
TEXT runtime·panicSliceAcap(SB),NOSPLIT,$0-16
MOVD R1, x+0(FP)
MOVD R2, y+8(FP)
JMP runtime·goPanicSliceAcap(SB)
TEXT runtime·panicSliceAcapU(SB),NOSPLIT,$0-16
MOVD R1, x+0(FP)
MOVD R2, y+8(FP)
JMP runtime·goPanicSliceAcapU(SB)
TEXT runtime·panicSliceB(SB),NOSPLIT,$0-16
MOVD R0, x+0(FP)
MOVD R1, y+8(FP)
JMP runtime·goPanicSliceB(SB)
TEXT runtime·panicSliceBU(SB),NOSPLIT,$0-16
MOVD R0, x+0(FP)
MOVD R1, y+8(FP)
JMP runtime·goPanicSliceBU(SB)
TEXT runtime·panicSlice3Alen(SB),NOSPLIT,$0-16
MOVD R2, x+0(FP)
MOVD R3, y+8(FP)
JMP runtime·goPanicSlice3Alen(SB)
TEXT runtime·panicSlice3AlenU(SB),NOSPLIT,$0-16
MOVD R2, x+0(FP)
MOVD R3, y+8(FP)
JMP runtime·goPanicSlice3AlenU(SB)
TEXT runtime·panicSlice3Acap(SB),NOSPLIT,$0-16
MOVD R2, x+0(FP)
MOVD R3, y+8(FP)
JMP runtime·goPanicSlice3Acap(SB)
TEXT runtime·panicSlice3AcapU(SB),NOSPLIT,$0-16
MOVD R2, x+0(FP)
MOVD R3, y+8(FP)
JMP runtime·goPanicSlice3AcapU(SB)
TEXT runtime·panicSlice3B(SB),NOSPLIT,$0-16
MOVD R1, x+0(FP)
MOVD R2, y+8(FP)
JMP runtime·goPanicSlice3B(SB)
TEXT runtime·panicSlice3BU(SB),NOSPLIT,$0-16
MOVD R1, x+0(FP)
MOVD R2, y+8(FP)
JMP runtime·goPanicSlice3BU(SB)
TEXT runtime·panicSlice3C(SB),NOSPLIT,$0-16
MOVD R0, x+0(FP)
MOVD R1, y+8(FP)
JMP runtime·goPanicSlice3C(SB)
TEXT runtime·panicSlice3CU(SB),NOSPLIT,$0-16
MOVD R0, x+0(FP)
MOVD R1, y+8(FP)
JMP runtime·goPanicSlice3CU(SB)
TEXT runtime·panicSliceConvert(SB),NOSPLIT,$0-16
MOVD R2, x+0(FP)
MOVD R3, y+8(FP)
JMP runtime·goPanicSliceConvert(SB)

558
src/runtime/asm_wasm.s Normal file

@@ -0,0 +1,558 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "go_asm.h"
#include "go_tls.h"
#include "funcdata.h"
#include "textflag.h"
TEXT runtime·rt0_go(SB), NOSPLIT|NOFRAME|TOPFRAME, $0
// save m->g0 = g0
MOVD $runtime·g0(SB), runtime·m0+m_g0(SB)
// save m0 to g0->m
MOVD $runtime·m0(SB), runtime·g0+g_m(SB)
// set g to g0
MOVD $runtime·g0(SB), g
CALLNORESUME runtime·check(SB)
#ifdef GOOS_js
CALLNORESUME runtime·args(SB)
#endif
CALLNORESUME runtime·osinit(SB)
CALLNORESUME runtime·schedinit(SB)
MOVD $runtime·mainPC(SB), 0(SP)
CALLNORESUME runtime·newproc(SB)
CALL runtime·mstart(SB) // WebAssembly stack will unwind when switching to another goroutine
UNDEF
TEXT runtime·mstart(SB),NOSPLIT|TOPFRAME,$0
CALL runtime·mstart0(SB)
RET // not reached
DATA runtime·mainPC+0(SB)/8,$runtime·main(SB)
GLOBL runtime·mainPC(SB),RODATA,$8
// func checkASM() bool
TEXT ·checkASM(SB), NOSPLIT, $0-1
MOVB $1, ret+0(FP)
RET
TEXT runtime·gogo(SB), NOSPLIT, $0-8
MOVD buf+0(FP), R0
MOVD gobuf_g(R0), R1
MOVD 0(R1), R2 // make sure g != nil
MOVD R1, g
MOVD gobuf_sp(R0), SP
// Put target PC at -8(SP), wasm_pc_f_loop will pick it up
Get SP
I32Const $8
I32Sub
I64Load gobuf_pc(R0)
I64Store $0
MOVD gobuf_ret(R0), RET0
MOVD gobuf_ctxt(R0), CTXT
// clear to help garbage collector
MOVD $0, gobuf_sp(R0)
MOVD $0, gobuf_ret(R0)
MOVD $0, gobuf_ctxt(R0)
I32Const $1
Return
// func mcall(fn func(*g))
// Switch to m->g0's stack, call fn(g).
// Fn must never return. It should gogo(&g->sched)
// to keep running g.
TEXT runtime·mcall(SB), NOSPLIT, $0-8
// CTXT = fn
MOVD fn+0(FP), CTXT
// R1 = g.m
MOVD g_m(g), R1
// R2 = g0
MOVD m_g0(R1), R2
// save state in g->sched
MOVD 0(SP), g_sched+gobuf_pc(g) // caller's PC
MOVD $fn+0(FP), g_sched+gobuf_sp(g) // caller's SP
// if g == g0 call badmcall
Get g
Get R2
I64Eq
If
JMP runtime·badmcall(SB)
End
// switch to g0's stack
I64Load (g_sched+gobuf_sp)(R2)
I64Const $8
I64Sub
I32WrapI64
Set SP
// set arg to current g
MOVD g, 0(SP)
// switch to g0
MOVD R2, g
// call fn
Get CTXT
I32WrapI64
I64Load $0
CALL
Get SP
I32Const $8
I32Add
Set SP
JMP runtime·badmcall2(SB)
// func systemstack(fn func())
TEXT runtime·systemstack(SB), NOSPLIT, $0-8
// R0 = fn
MOVD fn+0(FP), R0
// R1 = g.m
MOVD g_m(g), R1
// R2 = g0
MOVD m_g0(R1), R2
// if g == g0
Get g
Get R2
I64Eq
If
// no switch:
MOVD R0, CTXT
Get CTXT
I32WrapI64
I64Load $0
JMP
End
// if g != m.curg
Get g
I64Load m_curg(R1)
I64Ne
If
CALLNORESUME runtime·badsystemstack(SB)
CALLNORESUME runtime·abort(SB)
End
// switch:
// save state in g->sched. Pretend to
// be systemstack_switch if the G stack is scanned.
MOVD $runtime·systemstack_switch(SB), g_sched+gobuf_pc(g)
MOVD SP, g_sched+gobuf_sp(g)
// switch to g0
MOVD R2, g
// make it look like mstart called systemstack on g0, to stop traceback
I64Load (g_sched+gobuf_sp)(R2)
I64Const $8
I64Sub
Set R3
MOVD $runtime·mstart(SB), 0(R3)
MOVD R3, SP
// call fn
MOVD R0, CTXT
Get CTXT
I32WrapI64
I64Load $0
CALL
// switch back to g
MOVD g_m(g), R1
MOVD m_curg(R1), R2
MOVD R2, g
MOVD g_sched+gobuf_sp(R2), SP
MOVD $0, g_sched+gobuf_sp(R2)
RET
TEXT runtime·systemstack_switch(SB), NOSPLIT, $0-0
RET
TEXT runtime·abort(SB),NOSPLIT|NOFRAME,$0-0
UNDEF
// AES hashing not implemented for wasm
TEXT runtime·memhash(SB),NOSPLIT|NOFRAME,$0-32
JMP runtime·memhashFallback(SB)
TEXT runtime·strhash(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·strhashFallback(SB)
TEXT runtime·memhash32(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·memhash32Fallback(SB)
TEXT runtime·memhash64(SB),NOSPLIT|NOFRAME,$0-24
JMP runtime·memhash64Fallback(SB)
TEXT runtime·return0(SB), NOSPLIT, $0-0
MOVD $0, RET0
RET
TEXT runtime·asminit(SB), NOSPLIT, $0-0
// No per-thread init.
RET
TEXT ·publicationBarrier(SB), NOSPLIT, $0-0
RET
TEXT runtime·procyield(SB), NOSPLIT, $0-0 // FIXME
RET
TEXT runtime·breakpoint(SB), NOSPLIT, $0-0
UNDEF
// func switchToCrashStack0(fn func())
TEXT runtime·switchToCrashStack0(SB), NOSPLIT, $0-8
MOVD fn+0(FP), CTXT // context register
MOVD g_m(g), R2 // curm
// set g to gcrash
MOVD $runtime·gcrash(SB), g // g = &gcrash
MOVD R2, g_m(g) // g.m = curm
MOVD g, m_g0(R2) // curm.g0 = g
// switch to crashstack
I64Load (g_stack+stack_hi)(g)
I64Const $(-4*8)
I64Add
I32WrapI64
Set SP
// call target function
Get CTXT
I32WrapI64
I64Load $0
CALL
// should never return
CALL runtime·abort(SB)
UNDEF
// Called during function prolog when more stack is needed.
//
// The traceback routines see morestack on a g0 as being
// the top of a stack (for example, morestack calling newstack
// calling the scheduler calling newm calling gc), so we must
// record an argument size. For that purpose, it has no arguments.
TEXT runtime·morestack(SB), NOSPLIT, $0-0
// R1 = g.m
MOVD g_m(g), R1
// R2 = g0
MOVD m_g0(R1), R2
// Set g->sched to context in f.
NOP SP // tell vet SP changed - stop checking offsets
MOVD 0(SP), g_sched+gobuf_pc(g)
MOVD $8(SP), g_sched+gobuf_sp(g) // f's SP
MOVD CTXT, g_sched+gobuf_ctxt(g)
// Cannot grow scheduler stack (m->g0).
Get g
Get R2
I64Eq
If
CALLNORESUME runtime·badmorestackg0(SB)
CALLNORESUME runtime·abort(SB)
End
// Cannot grow signal stack (m->gsignal).
Get g
I64Load m_gsignal(R1)
I64Eq
If
CALLNORESUME runtime·badmorestackgsignal(SB)
CALLNORESUME runtime·abort(SB)
End
// Called from f.
// Set m->morebuf to f's caller.
MOVD 8(SP), m_morebuf+gobuf_pc(R1)
MOVD $16(SP), m_morebuf+gobuf_sp(R1) // f's caller's SP
MOVD g, m_morebuf+gobuf_g(R1)
// Call newstack on m->g0's stack.
MOVD R2, g
MOVD g_sched+gobuf_sp(R2), SP
CALL runtime·newstack(SB)
UNDEF // crash if newstack returns
// morestack but not preserving ctxt.
TEXT runtime·morestack_noctxt(SB),NOSPLIT,$0
MOVD $0, CTXT
JMP runtime·morestack(SB)
TEXT ·asmcgocall(SB), NOSPLIT, $0-0
UNDEF
#define DISPATCH(NAME, MAXSIZE) \
Get R0; \
I64Const $MAXSIZE; \
I64LeU; \
If; \
JMP NAME(SB); \
End
TEXT ·reflectcall(SB), NOSPLIT, $0-48
I64Load fn+8(FP)
I64Eqz
If
CALLNORESUME runtime·sigpanic<ABIInternal>(SB)
End
MOVW frameSize+32(FP), R0
DISPATCH(runtime·call16, 16)
DISPATCH(runtime·call32, 32)
DISPATCH(runtime·call64, 64)
DISPATCH(runtime·call128, 128)
DISPATCH(runtime·call256, 256)
DISPATCH(runtime·call512, 512)
DISPATCH(runtime·call1024, 1024)
DISPATCH(runtime·call2048, 2048)
DISPATCH(runtime·call4096, 4096)
DISPATCH(runtime·call8192, 8192)
DISPATCH(runtime·call16384, 16384)
DISPATCH(runtime·call32768, 32768)
DISPATCH(runtime·call65536, 65536)
DISPATCH(runtime·call131072, 131072)
DISPATCH(runtime·call262144, 262144)
DISPATCH(runtime·call524288, 524288)
DISPATCH(runtime·call1048576, 1048576)
DISPATCH(runtime·call2097152, 2097152)
DISPATCH(runtime·call4194304, 4194304)
DISPATCH(runtime·call8388608, 8388608)
DISPATCH(runtime·call16777216, 16777216)
DISPATCH(runtime·call33554432, 33554432)
DISPATCH(runtime·call67108864, 67108864)
DISPATCH(runtime·call134217728, 134217728)
DISPATCH(runtime·call268435456, 268435456)
DISPATCH(runtime·call536870912, 536870912)
DISPATCH(runtime·call1073741824, 1073741824)
JMP runtime·badreflectcall(SB)
#define CALLFN(NAME, MAXSIZE) \
TEXT NAME(SB), WRAPPER, $MAXSIZE-48; \
NO_LOCAL_POINTERS; \
MOVW stackArgsSize+24(FP), R0; \
\
Get R0; \
I64Eqz; \
Not; \
If; \
Get SP; \
I64Load stackArgs+16(FP); \
I32WrapI64; \
I64Load stackArgsSize+24(FP); \
I32WrapI64; \
MemoryCopy; \
End; \
\
MOVD f+8(FP), CTXT; \
Get CTXT; \
I32WrapI64; \
I64Load $0; \
CALL; \
\
I64Load32U stackRetOffset+28(FP); \
Set R0; \
\
MOVD stackArgsType+0(FP), RET0; \
\
I64Load stackArgs+16(FP); \
Get R0; \
I64Add; \
Set RET1; \
\
Get SP; \
I64ExtendI32U; \
Get R0; \
I64Add; \
Set RET2; \
\
I64Load32U stackArgsSize+24(FP); \
Get R0; \
I64Sub; \
Set RET3; \
\
CALL callRet<>(SB); \
RET
// callRet copies return values back at the end of call*. This is a
// separate function so it can allocate stack space for the arguments
// to reflectcallmove. It does not follow the Go ABI; it expects its
// arguments in registers.
TEXT callRet<>(SB), NOSPLIT, $40-0
NO_LOCAL_POINTERS
MOVD RET0, 0(SP)
MOVD RET1, 8(SP)
MOVD RET2, 16(SP)
MOVD RET3, 24(SP)
MOVD $0, 32(SP)
CALL runtime·reflectcallmove(SB)
RET
CALLFN(·call16, 16)
CALLFN(·call32, 32)
CALLFN(·call64, 64)
CALLFN(·call128, 128)
CALLFN(·call256, 256)
CALLFN(·call512, 512)
CALLFN(·call1024, 1024)
CALLFN(·call2048, 2048)
CALLFN(·call4096, 4096)
CALLFN(·call8192, 8192)
CALLFN(·call16384, 16384)
CALLFN(·call32768, 32768)
CALLFN(·call65536, 65536)
CALLFN(·call131072, 131072)
CALLFN(·call262144, 262144)
CALLFN(·call524288, 524288)
CALLFN(·call1048576, 1048576)
CALLFN(·call2097152, 2097152)
CALLFN(·call4194304, 4194304)
CALLFN(·call8388608, 8388608)
CALLFN(·call16777216, 16777216)
CALLFN(·call33554432, 33554432)
CALLFN(·call67108864, 67108864)
CALLFN(·call134217728, 134217728)
CALLFN(·call268435456, 268435456)
CALLFN(·call536870912, 536870912)
CALLFN(·call1073741824, 1073741824)
TEXT runtime·goexit(SB), NOSPLIT|TOPFRAME, $0-0
NOP // first PC of goexit is skipped
CALL runtime·goexit1(SB) // does not return
UNDEF
TEXT runtime·cgocallback(SB), NOSPLIT, $0-24
UNDEF
// gcWriteBarrier informs the GC about heap pointer writes.
//
// gcWriteBarrier does NOT follow the Go ABI. It accepts the
// number of bytes of buffer needed as a wasm argument
// (put on the TOS by the caller, lives in local R0 in this body)
// and returns a pointer to the buffer space as a wasm result
// (left on the TOS in this body, appears on the wasm stack
// in the caller).
TEXT gcWriteBarrier<>(SB), NOSPLIT, $0
Loop
// R3 = g.m
MOVD g_m(g), R3
// R4 = p
MOVD m_p(R3), R4
// R5 = wbBuf.next
MOVD p_wbBuf+wbBuf_next(R4), R5
// Increment wbBuf.next
Get R5
Get R0
I64Add
Set R5
// Is the buffer full?
Get R5
I64Load (p_wbBuf+wbBuf_end)(R4)
I64LeU
If
// Commit to the larger buffer.
MOVD R5, p_wbBuf+wbBuf_next(R4)
// Make return value (the original next position)
Get R5
Get R0
I64Sub
Return
End
// Flush
CALLNORESUME runtime·wbBufFlush(SB)
// Retry
Br $0
End
TEXT runtime·gcWriteBarrier1<ABIInternal>(SB),NOSPLIT,$0
I64Const $8
Call gcWriteBarrier<>(SB)
Return
TEXT runtime·gcWriteBarrier2<ABIInternal>(SB),NOSPLIT,$0
I64Const $16
Call gcWriteBarrier<>(SB)
Return
TEXT runtime·gcWriteBarrier3<ABIInternal>(SB),NOSPLIT,$0
I64Const $24
Call gcWriteBarrier<>(SB)
Return
TEXT runtime·gcWriteBarrier4<ABIInternal>(SB),NOSPLIT,$0
I64Const $32
Call gcWriteBarrier<>(SB)
Return
TEXT runtime·gcWriteBarrier5<ABIInternal>(SB),NOSPLIT,$0
I64Const $40
Call gcWriteBarrier<>(SB)
Return
TEXT runtime·gcWriteBarrier6<ABIInternal>(SB),NOSPLIT,$0
I64Const $48
Call gcWriteBarrier<>(SB)
Return
TEXT runtime·gcWriteBarrier7<ABIInternal>(SB),NOSPLIT,$0
I64Const $56
Call gcWriteBarrier<>(SB)
Return
TEXT runtime·gcWriteBarrier8<ABIInternal>(SB),NOSPLIT,$0
I64Const $64
Call gcWriteBarrier<>(SB)
Return
TEXT wasm_pc_f_loop(SB),NOSPLIT,$0
// Call the function for the current PC_F. Repeat until PAUSE != 0 indicates pause or exit.
// The WebAssembly stack may unwind, e.g. when switching goroutines.
// The Go stack on the linear memory is then used to jump to the correct functions
// with this loop, without having to restore the full WebAssembly stack.
// It is expected to have a pending call before entering the loop, so check PAUSE first.
Get PAUSE
I32Eqz
If
loop:
Loop
// Get PC_B & PC_F from -8(SP)
Get SP
I32Const $8
I32Sub
I32Load16U $0 // PC_B
Get SP
I32Const $8
I32Sub
I32Load16U $2 // PC_F
CallIndirect $0
Drop
Get PAUSE
I32Eqz
BrIf loop
End
End
I32Const $0
Set PAUSE
Return
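// A toy Go model of the loop above: PC_F indexes a function table and
// dispatch repeats until PAUSE is set (all names here are illustrative):
package main

import "fmt"

func main() {
	pause := false
	pcF := 0
	fntab := []func(){
		func() { fmt.Println("step 0"); pcF = 1 },
		func() { fmt.Println("step 1"); pause = true },
	}
	for !pause { // Loop ... Get PAUSE; I32Eqz; BrIf loop
		fntab[pcF]() // CallIndirect $0
	}
}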
TEXT wasm_export_lib(SB),NOSPLIT,$0
UNDEF


@@ -0,0 +1,9 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
TEXT ·publicationBarrier(SB),NOSPLIT|NOFRAME,$0-0
DMB $0xe // DMB ST
RET


@@ -0,0 +1,9 @@
// Copyright 2022 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
TEXT ·publicationBarrier(SB),NOSPLIT|NOFRAME,$0-0
DBAR
RET


@@ -0,0 +1,13 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build mips64 || mips64le
#include "textflag.h"
#define SYNC WORD $0xf
TEXT ·publicationBarrier(SB),NOSPLIT|NOFRAME,$0-0
SYNC
RET


@@ -0,0 +1,11 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build mips || mipsle
#include "textflag.h"
TEXT ·publicationBarrier(SB),NOSPLIT,$0
SYNC
RET


@@ -0,0 +1,124 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package runtime
import (
"internal/goexperiment"
"internal/runtime/atomic"
"unsafe"
)
// These functions cannot have go:noescape annotations,
// because while ptr does not escape, new does.
// If new is marked as not escaping, the compiler will make incorrect
// escape analysis decisions about the pointer value being stored.
// atomicwb performs a write barrier before an atomic pointer write.
// The caller should guard the call with "if writeBarrier.enabled".
//
// atomicwb should be an internal detail,
// but widely used packages access it using linkname.
// Notable members of the hall of shame include:
// - github.com/bytedance/gopkg
// - github.com/songzhibin97/gkit
//
// Do not remove or change the type signature.
// See go.dev/issue/67401.
//
//go:linkname atomicwb
//go:nosplit
func atomicwb(ptr *unsafe.Pointer, new unsafe.Pointer) {
slot := (*uintptr)(unsafe.Pointer(ptr))
buf := getg().m.p.ptr().wbBuf.get2()
buf[0] = *slot
buf[1] = uintptr(new)
}
// atomicstorep performs *ptr = new atomically and invokes a write barrier.
//
//go:nosplit
func atomicstorep(ptr unsafe.Pointer, new unsafe.Pointer) {
if writeBarrier.enabled {
atomicwb((*unsafe.Pointer)(ptr), new)
}
if goexperiment.CgoCheck2 {
cgoCheckPtrWrite((*unsafe.Pointer)(ptr), new)
}
atomic.StorepNoWB(noescape(ptr), new)
}
// atomic_storePointer is the implementation of internal/runtime/atomic.UnsafePointer.Store
// (like StoreNoWB but with the write barrier).
//
//go:nosplit
//go:linkname atomic_storePointer internal/runtime/atomic.storePointer
func atomic_storePointer(ptr *unsafe.Pointer, new unsafe.Pointer) {
atomicstorep(unsafe.Pointer(ptr), new)
}
// atomic_casPointer is the implementation of internal/runtime/atomic.UnsafePointer.CompareAndSwap
// (like CompareAndSwapNoWB but with the write barrier).
//
//go:nosplit
//go:linkname atomic_casPointer internal/runtime/atomic.casPointer
func atomic_casPointer(ptr *unsafe.Pointer, old, new unsafe.Pointer) bool {
if writeBarrier.enabled {
atomicwb(ptr, new)
}
if goexperiment.CgoCheck2 {
cgoCheckPtrWrite(ptr, new)
}
return atomic.Casp1(ptr, old, new)
}
// Like above, but implemented in terms of sync/atomic's uintptr operations.
// We cannot just call the runtime routines, because the race detector expects
// to be able to intercept the sync/atomic forms but not the runtime forms.
//go:linkname sync_atomic_StoreUintptr sync/atomic.StoreUintptr
func sync_atomic_StoreUintptr(ptr *uintptr, new uintptr)
//go:linkname sync_atomic_StorePointer sync/atomic.StorePointer
//go:nosplit
func sync_atomic_StorePointer(ptr *unsafe.Pointer, new unsafe.Pointer) {
if writeBarrier.enabled {
atomicwb(ptr, new)
}
if goexperiment.CgoCheck2 {
cgoCheckPtrWrite(ptr, new)
}
sync_atomic_StoreUintptr((*uintptr)(unsafe.Pointer(ptr)), uintptr(new))
}
//go:linkname sync_atomic_SwapUintptr sync/atomic.SwapUintptr
func sync_atomic_SwapUintptr(ptr *uintptr, new uintptr) uintptr
//go:linkname sync_atomic_SwapPointer sync/atomic.SwapPointer
//go:nosplit
func sync_atomic_SwapPointer(ptr *unsafe.Pointer, new unsafe.Pointer) unsafe.Pointer {
if writeBarrier.enabled {
atomicwb(ptr, new)
}
if goexperiment.CgoCheck2 {
cgoCheckPtrWrite(ptr, new)
}
old := unsafe.Pointer(sync_atomic_SwapUintptr((*uintptr)(noescape(unsafe.Pointer(ptr))), uintptr(new)))
return old
}
//go:linkname sync_atomic_CompareAndSwapUintptr sync/atomic.CompareAndSwapUintptr
func sync_atomic_CompareAndSwapUintptr(ptr *uintptr, old, new uintptr) bool
//go:linkname sync_atomic_CompareAndSwapPointer sync/atomic.CompareAndSwapPointer
//go:nosplit
func sync_atomic_CompareAndSwapPointer(ptr *unsafe.Pointer, old, new unsafe.Pointer) bool {
if writeBarrier.enabled {
atomicwb(ptr, new)
}
if goexperiment.CgoCheck2 {
cgoCheckPtrWrite(ptr, new)
}
return sync_atomic_CompareAndSwapUintptr((*uintptr)(noescape(unsafe.Pointer(ptr))), uintptr(old), uintptr(new))
}
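// User-level view (ordinary code, not runtime code): sync/atomic pointer
// operations route through the hooks above, so the GC write barrier runs
// on every published pointer.
package main

import (
	"fmt"
	"sync/atomic"
	"unsafe"
)

func main() {
	type T struct{ v int }
	var p unsafe.Pointer
	atomic.StorePointer(&p, unsafe.Pointer(&T{v: 1})) // goes via sync_atomic_StorePointer
	fmt.Println((*T)(atomic.LoadPointer(&p)).v)
}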


@@ -0,0 +1,14 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build ppc64 || ppc64le
#include "textflag.h"
TEXT ·publicationBarrier(SB),NOSPLIT|NOFRAME,$0-0
// LWSYNC is the "export" barrier recommended by Power ISA
// v2.07 book II, appendix B.2.2.2.
// LWSYNC is a load/load, load/store, and store/store barrier.
LWSYNC
RET
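// Portable Go code never calls publicationBarrier directly; the analogous
// user-level publish uses sync/atomic, which implies the needed ordering.
// A sketch of the pattern the barrier protects:
package main

import (
	"fmt"
	"sync/atomic"
)

type config struct{ ready bool }

var shared atomic.Pointer[config]

func main() {
	c := &config{}
	c.ready = true  // initialize the object fully...
	shared.Store(c) // ...then publish; readers never see a half-built *config
	fmt.Println(shared.Load().ready)
}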


@@ -0,0 +1,10 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
// func publicationBarrier()
TEXT ·publicationBarrier(SB),NOSPLIT|NOFRAME,$0-0
FENCE
RET

10
src/runtime/auxv_none.go Normal file

@@ -0,0 +1,10 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !linux && !darwin && !dragonfly && !freebsd && !netbsd && !solaris
package runtime
func sysargs(argc int32, argv **byte) {
}


@@ -0,0 +1,22 @@
// Copyright 2024 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package runtime
import _ "unsafe"
// These should be internal details
// but widely used packages access them using linkname.
// Do not remove or change the type signature.
// See go.dev/issue/67401.
// Notable members of the hall of shame include:
// - github.com/dgraph-io/ristretto
// - github.com/outcaste-io/ristretto
// - github.com/clubpay/ronykit
//go:linkname cputicks
// Notable members of the hall of shame include:
// - gvisor.dev/gvisor (from assembly)
//go:linkname sched


@@ -0,0 +1,17 @@
// Copyright 2024 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build amd64 || arm64
package runtime
import _ "unsafe"
// As of Go 1.22, the symbols below are found to be pulled via
// linkname in the wild. We provide a push linkname here, to
// keep them accessible with pull linknames.
// This may change in the future. Please do not depend on them
// in new code.
//go:linkname vdsoClockgettimeSym

489
src/runtime/callers_test.go Normal file

@@ -0,0 +1,489 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package runtime_test
import (
"reflect"
"runtime"
"strings"
"testing"
)
func f1(pan bool) []uintptr {
return f2(pan) // line 15
}
func f2(pan bool) []uintptr {
return f3(pan) // line 19
}
func f3(pan bool) []uintptr {
if pan {
panic("f3") // line 24
}
ret := make([]uintptr, 20)
return ret[:runtime.Callers(0, ret)] // line 27
}
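// For readers unfamiliar with the API under test, a minimal standalone
// use of runtime.Callers plus runtime.CallersFrames (ordinary code, not
// part of the test):
package main

import (
	"fmt"
	"runtime"
)

func main() {
	pcs := make([]uintptr, 8)
	n := runtime.Callers(0, pcs) // skip=0 includes runtime.Callers itself
	frames := runtime.CallersFrames(pcs[:n])
	for {
		f, more := frames.Next()
		fmt.Println(f.Function, f.Line)
		if !more {
			break
		}
	}
}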
func testCallers(t *testing.T, pcs []uintptr, pan bool) {
m := make(map[string]int, len(pcs))
frames := runtime.CallersFrames(pcs)
for {
frame, more := frames.Next()
if frame.Function != "" {
m[frame.Function] = frame.Line
}
if !more {
break
}
}
var seen []string
for k := range m {
seen = append(seen, k)
}
t.Logf("functions seen: %s", strings.Join(seen, " "))
var f3Line int
if pan {
f3Line = 24
} else {
f3Line = 27
}
want := []struct {
name string
line int
}{
{"f1", 15},
{"f2", 19},
{"f3", f3Line},
}
for _, w := range want {
if got := m["runtime_test."+w.name]; got != w.line {
t.Errorf("%s is line %d, want %d", w.name, got, w.line)
}
}
}
func testCallersEqual(t *testing.T, pcs []uintptr, want []string) {
t.Helper()
got := make([]string, 0, len(want))
frames := runtime.CallersFrames(pcs)
for {
frame, more := frames.Next()
if !more || len(got) >= len(want) {
break
}
got = append(got, frame.Function)
}
if !reflect.DeepEqual(want, got) {
t.Fatalf("wanted %v, got %v", want, got)
}
}
func TestCallers(t *testing.T) {
testCallers(t, f1(false), false)
}
func TestCallersPanic(t *testing.T) {
// Make sure we don't have any extra frames on the stack (due to
// open-coded defer processing)
want := []string{"runtime.Callers", "runtime_test.TestCallersPanic.func1",
"runtime.gopanic", "runtime_test.f3", "runtime_test.f2", "runtime_test.f1",
"runtime_test.TestCallersPanic"}
defer func() {
if r := recover(); r == nil {
t.Fatal("did not panic")
}
pcs := make([]uintptr, 20)
pcs = pcs[:runtime.Callers(0, pcs)]
testCallers(t, pcs, true)
testCallersEqual(t, pcs, want)
}()
f1(true)
}
func TestCallersDoublePanic(t *testing.T) {
// Make sure we don't have any extra frames on the stack (due to
// open-coded defer processing)
want := []string{"runtime.Callers", "runtime_test.TestCallersDoublePanic.func1.1",
"runtime.gopanic", "runtime_test.TestCallersDoublePanic.func1", "runtime.gopanic", "runtime_test.TestCallersDoublePanic"}
defer func() {
defer func() {
pcs := make([]uintptr, 20)
pcs = pcs[:runtime.Callers(0, pcs)]
if recover() == nil {
t.Fatal("did not panic")
}
testCallersEqual(t, pcs, want)
}()
if recover() == nil {
t.Fatal("did not panic")
}
panic(2)
}()
panic(1)
}
// Test that a defer after a successful recovery looks like it is called directly
// from the function with the defers.
func TestCallersAfterRecovery(t *testing.T) {
want := []string{"runtime.Callers", "runtime_test.TestCallersAfterRecovery.func1", "runtime_test.TestCallersAfterRecovery"}
defer func() {
pcs := make([]uintptr, 20)
pcs = pcs[:runtime.Callers(0, pcs)]
testCallersEqual(t, pcs, want)
}()
defer func() {
if recover() == nil {
t.Fatal("did not recover from panic")
}
}()
panic(1)
}
func TestCallersAbortedPanic(t *testing.T) {
want := []string{"runtime.Callers", "runtime_test.TestCallersAbortedPanic.func2", "runtime_test.TestCallersAbortedPanic"}
defer func() {
r := recover()
if r != nil {
t.Fatalf("should be no panic remaining to recover")
}
}()
defer func() {
// panic2 was aborted/replaced by panic1, so when panic2 was
// recovered, there is no remaining panic on the stack.
pcs := make([]uintptr, 20)
pcs = pcs[:runtime.Callers(0, pcs)]
testCallersEqual(t, pcs, want)
}()
defer func() {
r := recover()
if r != "panic2" {
t.Fatalf("got %v, wanted %v", r, "panic2")
}
}()
defer func() {
// panic2 aborts/replaces panic1, because it is a recursive panic
// that is not recovered within the defer function called by
// panic1 panicking sequence
panic("panic2")
}()
panic("panic1")
}
func TestCallersAbortedPanic2(t *testing.T) {
want := []string{"runtime.Callers", "runtime_test.TestCallersAbortedPanic2.func2", "runtime_test.TestCallersAbortedPanic2"}
defer func() {
r := recover()
if r != nil {
t.Fatalf("should be no panic remaining to recover")
}
}()
defer func() {
pcs := make([]uintptr, 20)
pcs = pcs[:runtime.Callers(0, pcs)]
testCallersEqual(t, pcs, want)
}()
func() {
defer func() {
r := recover()
if r != "panic2" {
t.Fatalf("got %v, wanted %v", r, "panic2")
}
}()
func() {
defer func() {
// Again, panic2 aborts/replaces panic1
panic("panic2")
}()
panic("panic1")
}()
}()
}
func TestCallersNilPointerPanic(t *testing.T) {
// Make sure we don't have any extra frames on the stack (due to
// open-coded defer processing)
want := []string{"runtime.Callers", "runtime_test.TestCallersNilPointerPanic.func1",
"runtime.gopanic", "runtime.panicmem", "runtime.sigpanic",
"runtime_test.TestCallersNilPointerPanic"}
defer func() {
if r := recover(); r == nil {
t.Fatal("did not panic")
}
pcs := make([]uintptr, 20)
pcs = pcs[:runtime.Callers(0, pcs)]
testCallersEqual(t, pcs, want)
}()
var p *int
if *p == 3 {
t.Fatal("did not see nil pointer panic")
}
}
func TestCallersDivZeroPanic(t *testing.T) {
// Make sure we don't have any extra frames on the stack (due to
// open-coded defer processing)
want := []string{"runtime.Callers", "runtime_test.TestCallersDivZeroPanic.func1",
"runtime.gopanic", "runtime.panicdivide",
"runtime_test.TestCallersDivZeroPanic"}
defer func() {
if r := recover(); r == nil {
t.Fatal("did not panic")
}
pcs := make([]uintptr, 20)
pcs = pcs[:runtime.Callers(0, pcs)]
testCallersEqual(t, pcs, want)
}()
var n int
if 5/n == 1 {
t.Fatal("did not see divide-by-sizer panic")
}
}
func TestCallersDeferNilFuncPanic(t *testing.T) {
// Make sure we don't have any extra frames on the stack. We cut off the check
// at runtime.sigpanic, because non-open-coded defers (which may be used in
// non-opt or race checker mode) include an extra 'deferreturn' frame (which is
// where the nil pointer deref happens).
state := 1
want := []string{"runtime.Callers", "runtime_test.TestCallersDeferNilFuncPanic.func1",
"runtime.gopanic", "runtime.panicmem", "runtime.sigpanic"}
defer func() {
if r := recover(); r == nil {
t.Fatal("did not panic")
}
pcs := make([]uintptr, 20)
pcs = pcs[:runtime.Callers(0, pcs)]
testCallersEqual(t, pcs, want)
if state == 1 {
t.Fatal("nil defer func panicked at defer time rather than function exit time")
}
}()
var f func()
defer f()
// Use the value of 'state' to make sure nil defer func f causes panic at
// function exit, rather than at the defer statement.
state = 2
}
// Same test, but forcing non-open-coded defer by putting the defer in a loop. See
// issue #36050
func TestCallersDeferNilFuncPanicWithLoop(t *testing.T) {
state := 1
want := []string{"runtime.Callers", "runtime_test.TestCallersDeferNilFuncPanicWithLoop.func1",
"runtime.gopanic", "runtime.panicmem", "runtime.sigpanic", "runtime.deferreturn", "runtime_test.TestCallersDeferNilFuncPanicWithLoop"}
defer func() {
if r := recover(); r == nil {
t.Fatal("did not panic")
}
pcs := make([]uintptr, 20)
pcs = pcs[:runtime.Callers(0, pcs)]
testCallersEqual(t, pcs, want)
if state == 1 {
t.Fatal("nil defer func panicked at defer time rather than function exit time")
}
}()
for i := 0; i < 1; i++ {
var f func()
defer f()
}
// Use the value of 'state' to make sure nil defer func f causes panic at
// function exit, rather than at the defer statement.
state = 2
}
// issue #51988
// Func.Endlineno was lost when instantiating generic functions, leading to incorrect
// stack trace positions.
func TestCallersEndlineno(t *testing.T) {
testNormalEndlineno(t)
testGenericEndlineno[int](t)
}
func testNormalEndlineno(t *testing.T) {
defer testCallerLine(t, callerLine(t, 0)+1)
}
func testGenericEndlineno[_ any](t *testing.T) {
defer testCallerLine(t, callerLine(t, 0)+1)
}
func testCallerLine(t *testing.T, want int) {
if have := callerLine(t, 1); have != want {
t.Errorf("callerLine(1) returned %d, but want %d\n", have, want)
}
}
func callerLine(t *testing.T, skip int) int {
_, _, line, ok := runtime.Caller(skip + 1)
if !ok {
t.Fatalf("runtime.Caller(%d) failed", skip+1)
}
return line
}
func BenchmarkCallers(b *testing.B) {
b.Run("cached", func(b *testing.B) {
// Very pcvalueCache-friendly, no inlining.
callersCached(b, 100)
})
b.Run("inlined", func(b *testing.B) {
// Some inlining, still pretty cache-friendly.
callersInlined(b, 100)
})
b.Run("no-cache", func(b *testing.B) {
// Cache-hostile
callersNoCache(b, 100)
})
}
func callersCached(b *testing.B, n int) int {
if n <= 0 {
pcs := make([]uintptr, 32)
b.ResetTimer()
for i := 0; i < b.N; i++ {
runtime.Callers(0, pcs)
}
b.StopTimer()
return 0
}
return 1 + callersCached(b, n-1)
}
func callersInlined(b *testing.B, n int) int {
if n <= 0 {
pcs := make([]uintptr, 32)
b.ResetTimer()
for i := 0; i < b.N; i++ {
runtime.Callers(0, pcs)
}
b.StopTimer()
return 0
}
return 1 + callersInlined1(b, n-1)
}
func callersInlined1(b *testing.B, n int) int { return callersInlined2(b, n) }
func callersInlined2(b *testing.B, n int) int { return callersInlined3(b, n) }
func callersInlined3(b *testing.B, n int) int { return callersInlined4(b, n) }
func callersInlined4(b *testing.B, n int) int { return callersInlined(b, n) }
func callersNoCache(b *testing.B, n int) int {
if n <= 0 {
pcs := make([]uintptr, 32)
b.ResetTimer()
for i := 0; i < b.N; i++ {
runtime.Callers(0, pcs)
}
b.StopTimer()
return 0
}
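// Each case below contains its own call site, so the recursion cycles
// through 16 distinct return PCs, defeating the PC-keyed pcvalueCache.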
switch n % 16 {
case 0:
return 1 + callersNoCache(b, n-1)
case 1:
return 1 + callersNoCache(b, n-1)
case 2:
return 1 + callersNoCache(b, n-1)
case 3:
return 1 + callersNoCache(b, n-1)
case 4:
return 1 + callersNoCache(b, n-1)
case 5:
return 1 + callersNoCache(b, n-1)
case 6:
return 1 + callersNoCache(b, n-1)
case 7:
return 1 + callersNoCache(b, n-1)
case 8:
return 1 + callersNoCache(b, n-1)
case 9:
return 1 + callersNoCache(b, n-1)
case 10:
return 1 + callersNoCache(b, n-1)
case 11:
return 1 + callersNoCache(b, n-1)
case 12:
return 1 + callersNoCache(b, n-1)
case 13:
return 1 + callersNoCache(b, n-1)
case 14:
return 1 + callersNoCache(b, n-1)
default:
return 1 + callersNoCache(b, n-1)
}
}
func BenchmarkFPCallers(b *testing.B) {
b.Run("cached", func(b *testing.B) {
// Very pcvalueCache-friendly, no inlining.
fpCallersCached(b, 100)
})
}
func fpCallersCached(b *testing.B, n int) int {
if n <= 0 {
pcs := make([]uintptr, 32)
b.ResetTimer()
for i := 0; i < b.N; i++ {
runtime.FPCallers(pcs)
}
b.StopTimer()
return 0
}
return 1 + fpCallersCached(b, n-1)
}
func TestFPUnwindAfterRecovery(t *testing.T) {
if !runtime.FramePointerEnabled {
t.Skip("frame pointers not supported for this architecture")
}
// Make sure that frame pointer unwinding succeeds from a deferred
// function run after recovering from a panic. It can fail if the
// recovery does not properly restore the caller's frame pointer before
// running the remaining deferred functions.
//
// This test does not verify the accuracy of the call stack (it
// currently includes a frame from runtime.deferreturn which would
// normally be omitted). It is only intended to check that producing the
// call stack won't crash.
defer func() {
pcs := make([]uintptr, 32)
for i := range pcs {
// If runtime.recovery doesn't properly restore the
// frame pointer before returning control to this
// function, it will point somewhere lower in the stack,
// at one of the frames of runtime.gopanic() or one of
// its callees prior to recovery. So, we put some
// non-zero values on the stack to ensure that frame
// pointer unwinding will crash if it sees the old,
// invalid frame pointer.
pcs[i] = 10
}
runtime.FPCallers(pcs)
t.Logf("%v", pcs)
}()
defer func() {
if recover() == nil {
t.Fatal("did not recover from panic")
}
}()
panic(1)
}
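// For reference (a sketch, not part of the tests above): the raw PCs returned
// by runtime.Callers, as exercised throughout this file, are normally
// symbolized with runtime.CallersFrames. Assumes "fmt" is imported; the
// function name is hypothetical.
func exampleSymbolizeCallers() {
	pcs := make([]uintptr, 32)
	n := runtime.Callers(0, pcs) // skip=0 starts the trace at runtime.Callers itself
	frames := runtime.CallersFrames(pcs[:n])
	for {
		frame, more := frames.Next()
		fmt.Printf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
		if !more {
			break
		}
	}
}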

src/runtime/cgo.go Normal file

@@ -0,0 +1,90 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package runtime
import "unsafe"
//go:cgo_export_static main
// Filled in by runtime/cgo when linked into binary.
//go:linkname _cgo_init _cgo_init
//go:linkname _cgo_thread_start _cgo_thread_start
//go:linkname _cgo_sys_thread_create _cgo_sys_thread_create
//go:linkname _cgo_notify_runtime_init_done _cgo_notify_runtime_init_done
//go:linkname _cgo_callers _cgo_callers
//go:linkname _cgo_set_context_function _cgo_set_context_function
//go:linkname _cgo_yield _cgo_yield
//go:linkname _cgo_pthread_key_created _cgo_pthread_key_created
//go:linkname _cgo_bindm _cgo_bindm
//go:linkname _cgo_getstackbound _cgo_getstackbound
var (
_cgo_init unsafe.Pointer
_cgo_thread_start unsafe.Pointer
_cgo_sys_thread_create unsafe.Pointer
_cgo_notify_runtime_init_done unsafe.Pointer
_cgo_callers unsafe.Pointer
_cgo_set_context_function unsafe.Pointer
_cgo_yield unsafe.Pointer
_cgo_pthread_key_created unsafe.Pointer
_cgo_bindm unsafe.Pointer
_cgo_getstackbound unsafe.Pointer
)
// iscgo is set to true by the runtime/cgo package
//
// iscgo should be an internal detail,
// but widely used packages access it using linkname.
// Notable members of the hall of shame include:
// - github.com/ebitengine/purego
//
// Do not remove or change the type signature.
// See go.dev/issue/67401.
//
//go:linkname iscgo
var iscgo bool
// set_crosscall2 is set by the runtime/cgo package
// set_crosscall2 should be an internal detail,
// but widely used packages access it using linkname.
// Notable members of the hall of shame include:
// - github.com/ebitengine/purego
//
// Do not remove or change the type signature.
// See go.dev/issue/67401.
//
//go:linkname set_crosscall2
var set_crosscall2 func()
// cgoHasExtraM is set on startup when an extra M is created for cgo.
// The extra M must be created before any C/C++ code calls cgocallback.
var cgoHasExtraM bool
// cgoUse is called by cgo-generated code (using go:linkname to get at
// an unexported name). The calls serve two purposes:
// 1) they are opaque to escape analysis, so the argument is considered to
// escape to the heap.
// 2) they keep the argument alive until the call site; the call is emitted after
// the end of the (presumed) use of the argument by C.
// cgoUse should not actually be called (see cgoAlwaysFalse).
func cgoUse(any) { throw("cgoUse should not be called") }
// cgoAlwaysFalse is a boolean value that is always false.
// The cgo-generated code says if cgoAlwaysFalse { cgoUse(p) }.
// The compiler cannot see that cgoAlwaysFalse is always false,
// so it emits the test and keeps the call, giving the desired
// escape analysis result. The test is cheaper than the call.
var cgoAlwaysFalse bool
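// Illustration (a sketch, not part of this file): after the (presumed) last
// use of an argument p by C, cgo-generated code emits exactly the pattern
// described above:
//
//	if cgoAlwaysFalse {
//		cgoUse(p)
//	}
//
// The compiler cannot prove the branch dead, so p is considered to escape
// and is kept alive up to this point.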
var cgo_yield = &_cgo_yield
func cgoNoCallback(v bool) {
g := getg()
if g.nocgocallback && v {
panic("runtime: unexpected setting cgoNoCallback")
}
g.nocgocallback = v
}
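// Illustration (a sketch of how the runtime uses the pointers declared above,
// drawn from the scheduler rather than this file): before calling through one
// of them, the runtime checks that runtime/cgo actually filled it in:
//
//	if _cgo_thread_start == nil {
//		throw("_cgo_thread_start missing")
//	}
//	asmcgocall(_cgo_thread_start, unsafe.Pointer(&ts)) // ts holds the thread-start arguments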

src/runtime/cgo/abi_amd64.h Normal file

@@ -0,0 +1,99 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Macros for transitioning from the host ABI to Go ABI0.
//
// These save the frame pointer, so in general, functions that use
// these should have zero frame size to suppress the automatic frame
// pointer, though it's harmless to not do this.
#ifdef GOOS_windows
// REGS_HOST_TO_ABI0_STACK is the stack bytes used by
// PUSH_REGS_HOST_TO_ABI0.
#define REGS_HOST_TO_ABI0_STACK (28*8 + 8)
// PUSH_REGS_HOST_TO_ABI0 prepares for transitioning from
// the host ABI to Go ABI0 code. It saves all registers that are
// callee-save in the host ABI and caller-save in Go ABI0 and prepares
// for entry to Go.
//
// Save DI SI BP BX R12 R13 R14 R15 X6-X15 registers and the DF flag.
// Clear the DF flag for the Go ABI.
// MXCSR matches the Go ABI, so we don't have to set that,
// and Go doesn't modify it, so we don't have to save it.
#define PUSH_REGS_HOST_TO_ABI0() \
PUSHFQ \
CLD \
ADJSP $(REGS_HOST_TO_ABI0_STACK - 8) \
MOVQ DI, (0*8)(SP) \
MOVQ SI, (1*8)(SP) \
MOVQ BP, (2*8)(SP) \
MOVQ BX, (3*8)(SP) \
MOVQ R12, (4*8)(SP) \
MOVQ R13, (5*8)(SP) \
MOVQ R14, (6*8)(SP) \
MOVQ R15, (7*8)(SP) \
MOVUPS X6, (8*8)(SP) \
MOVUPS X7, (10*8)(SP) \
MOVUPS X8, (12*8)(SP) \
MOVUPS X9, (14*8)(SP) \
MOVUPS X10, (16*8)(SP) \
MOVUPS X11, (18*8)(SP) \
MOVUPS X12, (20*8)(SP) \
MOVUPS X13, (22*8)(SP) \
MOVUPS X14, (24*8)(SP) \
MOVUPS X15, (26*8)(SP)
#define POP_REGS_HOST_TO_ABI0() \
MOVQ (0*8)(SP), DI \
MOVQ (1*8)(SP), SI \
MOVQ (2*8)(SP), BP \
MOVQ (3*8)(SP), BX \
MOVQ (4*8)(SP), R12 \
MOVQ (5*8)(SP), R13 \
MOVQ (6*8)(SP), R14 \
MOVQ (7*8)(SP), R15 \
MOVUPS (8*8)(SP), X6 \
MOVUPS (10*8)(SP), X7 \
MOVUPS (12*8)(SP), X8 \
MOVUPS (14*8)(SP), X9 \
MOVUPS (16*8)(SP), X10 \
MOVUPS (18*8)(SP), X11 \
MOVUPS (20*8)(SP), X12 \
MOVUPS (22*8)(SP), X13 \
MOVUPS (24*8)(SP), X14 \
MOVUPS (26*8)(SP), X15 \
ADJSP $-(REGS_HOST_TO_ABI0_STACK - 8) \
POPFQ
#else
// SysV ABI
#define REGS_HOST_TO_ABI0_STACK (6*8)
// SysV MXCSR matches the Go ABI, so we don't have to set that,
// and Go doesn't modify it, so we don't have to save it.
// Both SysV and Go require DF to be cleared, so that's already clear.
// The SysV and Go frame pointer conventions are compatible.
#define PUSH_REGS_HOST_TO_ABI0() \
ADJSP $(REGS_HOST_TO_ABI0_STACK) \
MOVQ BP, (5*8)(SP) \
LEAQ (5*8)(SP), BP \
MOVQ BX, (0*8)(SP) \
MOVQ R12, (1*8)(SP) \
MOVQ R13, (2*8)(SP) \
MOVQ R14, (3*8)(SP) \
MOVQ R15, (4*8)(SP)
#define POP_REGS_HOST_TO_ABI0() \
MOVQ (0*8)(SP), BX \
MOVQ (1*8)(SP), R12 \
MOVQ (2*8)(SP), R13 \
MOVQ (3*8)(SP), R14 \
MOVQ (4*8)(SP), R15 \
MOVQ (5*8)(SP), BP \
ADJSP $-(REGS_HOST_TO_ABI0_STACK)
#endif
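// Usage sketch (mirroring crosscall2 in asm_amd64.s; example<> is a
// hypothetical symbol): bracket the Go ABI0 body with the two macros,
// declaring a zero-size frame as recommended above:
//
//	TEXT example<>(SB),NOSPLIT,$0-0
//		PUSH_REGS_HOST_TO_ABI0()
//		// ... code following Go ABI0 ...
//		POP_REGS_HOST_TO_ABI0()
//		RET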

src/runtime/cgo/abi_arm64.h Normal file

@@ -0,0 +1,43 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Macros for transitioning from the host ABI to Go ABI0.
//
// These macros save and restore the callee-saved registers
// from the stack, but they don't adjust the stack pointer, so
// the user should prepare stack space in advance.
// SAVE_R19_TO_R28(offset) saves R19 ~ R28 to the stack space
// of ((offset)+0*8)(RSP) ~ ((offset)+9*8)(RSP).
//
// SAVE_F8_TO_F15(offset) saves F8 ~ F15 to the stack space
// of ((offset)+0*8)(RSP) ~ ((offset)+7*8)(RSP).
//
// R29 is not saved because Go will save and restore it.
#define SAVE_R19_TO_R28(offset) \
STP (R19, R20), ((offset)+0*8)(RSP) \
STP (R21, R22), ((offset)+2*8)(RSP) \
STP (R23, R24), ((offset)+4*8)(RSP) \
STP (R25, R26), ((offset)+6*8)(RSP) \
STP (R27, g), ((offset)+8*8)(RSP)
#define RESTORE_R19_TO_R28(offset) \
LDP ((offset)+0*8)(RSP), (R19, R20) \
LDP ((offset)+2*8)(RSP), (R21, R22) \
LDP ((offset)+4*8)(RSP), (R23, R24) \
LDP ((offset)+6*8)(RSP), (R25, R26) \
LDP ((offset)+8*8)(RSP), (R27, g) /* R28 */
#define SAVE_F8_TO_F15(offset) \
FSTPD (F8, F9), ((offset)+0*8)(RSP) \
FSTPD (F10, F11), ((offset)+2*8)(RSP) \
FSTPD (F12, F13), ((offset)+4*8)(RSP) \
FSTPD (F14, F15), ((offset)+6*8)(RSP)
#define RESTORE_F8_TO_F15(offset) \
FLDPD ((offset)+0*8)(RSP), (F8, F9) \
FLDPD ((offset)+2*8)(RSP), (F10, F11) \
FLDPD ((offset)+4*8)(RSP), (F12, F13) \
FLDPD ((offset)+6*8)(RSP), (F14, F15)
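// Usage sketch (modeled on crosscall2 in asm_arm64.s): the caller reserves
// stack space in advance, here 18*8 bytes for R19-R28 plus F8-F15, and
// releases it afterwards:
//
//	SUB	$(8*18), RSP
//	SAVE_R19_TO_R28(0*8)
//	SAVE_F8_TO_F15(10*8)
//	// ... call into Go ...
//	RESTORE_R19_TO_R28(0*8)
//	RESTORE_F8_TO_F15(10*8)
//	ADD	$(8*18), RSP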

src/runtime/cgo/abi_loong64.h Normal file

@@ -0,0 +1,60 @@
// Copyright 2022 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Macros for transitioning from the host ABI to Go ABI0.
//
// These macros save and restore the callee-saved registers
// from the stack, but they don't adjust the stack pointer, so
// the user should prepare stack space in advance.
// SAVE_R22_TO_R31(offset) saves R22 ~ R31 to the stack space
// of ((offset)+0*8)(R3) ~ ((offset)+9*8)(R3).
//
// SAVE_F24_TO_F31(offset) saves F24 ~ F31 to the stack space
// of ((offset)+0*8)(R3) ~ ((offset)+7*8)(R3).
//
// Note: g is R22
#define SAVE_R22_TO_R31(offset) \
MOVV g, ((offset)+(0*8))(R3) \
MOVV R23, ((offset)+(1*8))(R3) \
MOVV R24, ((offset)+(2*8))(R3) \
MOVV R25, ((offset)+(3*8))(R3) \
MOVV R26, ((offset)+(4*8))(R3) \
MOVV R27, ((offset)+(5*8))(R3) \
MOVV R28, ((offset)+(6*8))(R3) \
MOVV R29, ((offset)+(7*8))(R3) \
MOVV R30, ((offset)+(8*8))(R3) \
MOVV R31, ((offset)+(9*8))(R3)
#define SAVE_F24_TO_F31(offset) \
MOVD F24, ((offset)+(0*8))(R3) \
MOVD F25, ((offset)+(1*8))(R3) \
MOVD F26, ((offset)+(2*8))(R3) \
MOVD F27, ((offset)+(3*8))(R3) \
MOVD F28, ((offset)+(4*8))(R3) \
MOVD F29, ((offset)+(5*8))(R3) \
MOVD F30, ((offset)+(6*8))(R3) \
MOVD F31, ((offset)+(7*8))(R3)
#define RESTORE_R22_TO_R31(offset) \
MOVV ((offset)+(0*8))(R3), g \
MOVV ((offset)+(1*8))(R3), R23 \
MOVV ((offset)+(2*8))(R3), R24 \
MOVV ((offset)+(3*8))(R3), R25 \
MOVV ((offset)+(4*8))(R3), R26 \
MOVV ((offset)+(5*8))(R3), R27 \
MOVV ((offset)+(6*8))(R3), R28 \
MOVV ((offset)+(7*8))(R3), R29 \
MOVV ((offset)+(8*8))(R3), R30 \
MOVV ((offset)+(9*8))(R3), R31
#define RESTORE_F24_TO_F31(offset) \
MOVD ((offset)+(0*8))(R3), F24 \
MOVD ((offset)+(1*8))(R3), F25 \
MOVD ((offset)+(2*8))(R3), F26 \
MOVD ((offset)+(3*8))(R3), F27 \
MOVD ((offset)+(4*8))(R3), F28 \
MOVD ((offset)+(5*8))(R3), F29 \
MOVD ((offset)+(6*8))(R3), F30 \
MOVD ((offset)+(7*8))(R3), F31

src/runtime/cgo/abi_ppc64x.h Normal file

@@ -0,0 +1,195 @@
// Copyright 2023 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Macros for transitioning from the host ABI to Go ABI
//
// On PPC64/ELFv2 targets, the following registers are callee
// saved when called from C. They must be preserved before
// calling into Go which does not preserve any of them.
//
// R14-R31
// CR2-4
// VR20-31
// F14-F31
//
// xcoff(aix) and ELFv1 are similar, but may only require a
// subset of these.
//
// These macros assume a 16 byte aligned stack pointer. This
// is required by ELFv1, ELFv2, and AIX PPC64.
#define SAVE_GPR_SIZE (18*8)
#define SAVE_GPR(offset) \
MOVD R14, (offset+8*0)(R1) \
MOVD R15, (offset+8*1)(R1) \
MOVD R16, (offset+8*2)(R1) \
MOVD R17, (offset+8*3)(R1) \
MOVD R18, (offset+8*4)(R1) \
MOVD R19, (offset+8*5)(R1) \
MOVD R20, (offset+8*6)(R1) \
MOVD R21, (offset+8*7)(R1) \
MOVD R22, (offset+8*8)(R1) \
MOVD R23, (offset+8*9)(R1) \
MOVD R24, (offset+8*10)(R1) \
MOVD R25, (offset+8*11)(R1) \
MOVD R26, (offset+8*12)(R1) \
MOVD R27, (offset+8*13)(R1) \
MOVD R28, (offset+8*14)(R1) \
MOVD R29, (offset+8*15)(R1) \
MOVD g, (offset+8*16)(R1) \
MOVD R31, (offset+8*17)(R1)
#define RESTORE_GPR(offset) \
MOVD (offset+8*0)(R1), R14 \
MOVD (offset+8*1)(R1), R15 \
MOVD (offset+8*2)(R1), R16 \
MOVD (offset+8*3)(R1), R17 \
MOVD (offset+8*4)(R1), R18 \
MOVD (offset+8*5)(R1), R19 \
MOVD (offset+8*6)(R1), R20 \
MOVD (offset+8*7)(R1), R21 \
MOVD (offset+8*8)(R1), R22 \
MOVD (offset+8*9)(R1), R23 \
MOVD (offset+8*10)(R1), R24 \
MOVD (offset+8*11)(R1), R25 \
MOVD (offset+8*12)(R1), R26 \
MOVD (offset+8*13)(R1), R27 \
MOVD (offset+8*14)(R1), R28 \
MOVD (offset+8*15)(R1), R29 \
MOVD (offset+8*16)(R1), g \
MOVD (offset+8*17)(R1), R31
#define SAVE_FPR_SIZE (18*8)
#define SAVE_FPR(offset) \
FMOVD F14, (offset+8*0)(R1) \
FMOVD F15, (offset+8*1)(R1) \
FMOVD F16, (offset+8*2)(R1) \
FMOVD F17, (offset+8*3)(R1) \
FMOVD F18, (offset+8*4)(R1) \
FMOVD F19, (offset+8*5)(R1) \
FMOVD F20, (offset+8*6)(R1) \
FMOVD F21, (offset+8*7)(R1) \
FMOVD F22, (offset+8*8)(R1) \
FMOVD F23, (offset+8*9)(R1) \
FMOVD F24, (offset+8*10)(R1) \
FMOVD F25, (offset+8*11)(R1) \
FMOVD F26, (offset+8*12)(R1) \
FMOVD F27, (offset+8*13)(R1) \
FMOVD F28, (offset+8*14)(R1) \
FMOVD F29, (offset+8*15)(R1) \
FMOVD F30, (offset+8*16)(R1) \
FMOVD F31, (offset+8*17)(R1)
#define RESTORE_FPR(offset) \
FMOVD (offset+8*0)(R1), F14 \
FMOVD (offset+8*1)(R1), F15 \
FMOVD (offset+8*2)(R1), F16 \
FMOVD (offset+8*3)(R1), F17 \
FMOVD (offset+8*4)(R1), F18 \
FMOVD (offset+8*5)(R1), F19 \
FMOVD (offset+8*6)(R1), F20 \
FMOVD (offset+8*7)(R1), F21 \
FMOVD (offset+8*8)(R1), F22 \
FMOVD (offset+8*9)(R1), F23 \
FMOVD (offset+8*10)(R1), F24 \
FMOVD (offset+8*11)(R1), F25 \
FMOVD (offset+8*12)(R1), F26 \
FMOVD (offset+8*13)(R1), F27 \
FMOVD (offset+8*14)(R1), F28 \
FMOVD (offset+8*15)(R1), F29 \
FMOVD (offset+8*16)(R1), F30 \
FMOVD (offset+8*17)(R1), F31
// Save and restore VR20-31 (aka VSR56-63). These
// macros must point to a 16B aligned offset.
#define SAVE_VR_SIZE (12*16)
#define SAVE_VR(offset, rtmp) \
MOVD $(offset+16*0), rtmp \
STVX V20, (rtmp)(R1) \
MOVD $(offset+16*1), rtmp \
STVX V21, (rtmp)(R1) \
MOVD $(offset+16*2), rtmp \
STVX V22, (rtmp)(R1) \
MOVD $(offset+16*3), rtmp \
STVX V23, (rtmp)(R1) \
MOVD $(offset+16*4), rtmp \
STVX V24, (rtmp)(R1) \
MOVD $(offset+16*5), rtmp \
STVX V25, (rtmp)(R1) \
MOVD $(offset+16*6), rtmp \
STVX V26, (rtmp)(R1) \
MOVD $(offset+16*7), rtmp \
STVX V27, (rtmp)(R1) \
MOVD $(offset+16*8), rtmp \
STVX V28, (rtmp)(R1) \
MOVD $(offset+16*9), rtmp \
STVX V29, (rtmp)(R1) \
MOVD $(offset+16*10), rtmp \
STVX V30, (rtmp)(R1) \
MOVD $(offset+16*11), rtmp \
STVX V31, (rtmp)(R1)
#define RESTORE_VR(offset, rtmp) \
MOVD $(offset+16*0), rtmp \
LVX (rtmp)(R1), V20 \
MOVD $(offset+16*1), rtmp \
LVX (rtmp)(R1), V21 \
MOVD $(offset+16*2), rtmp \
LVX (rtmp)(R1), V22 \
MOVD $(offset+16*3), rtmp \
LVX (rtmp)(R1), V23 \
MOVD $(offset+16*4), rtmp \
LVX (rtmp)(R1), V24 \
MOVD $(offset+16*5), rtmp \
LVX (rtmp)(R1), V25 \
MOVD $(offset+16*6), rtmp \
LVX (rtmp)(R1), V26 \
MOVD $(offset+16*7), rtmp \
LVX (rtmp)(R1), V27 \
MOVD $(offset+16*8), rtmp \
LVX (rtmp)(R1), V28 \
MOVD $(offset+16*9), rtmp \
LVX (rtmp)(R1), V29 \
MOVD $(offset+16*10), rtmp \
LVX (rtmp)(R1), V30 \
MOVD $(offset+16*11), rtmp \
LVX (rtmp)(R1), V31
// LR and CR are saved in the caller's frame. The callee must
// make space for all other callee-save registers.
#define SAVE_ALL_REG_SIZE (SAVE_GPR_SIZE+SAVE_FPR_SIZE+SAVE_VR_SIZE)
// Stack a frame and save all callee-save registers following the
// host OS's ABI. Fortunately, this is identical for AIX, ELFv1, and
// ELFv2. All host ABIs require the stack pointer to maintain 16 byte
// alignment, and save the callee-save registers in the same places.
//
// To restate, R1 is assumed to be aligned when this macro is used.
// This assumes the caller's frame is compliant with the host ABI.
// CR and LR are saved into the caller's frame per the host ABI.
// R0 is initialized to $0 as expected by Go.
#define STACK_AND_SAVE_HOST_TO_GO_ABI(extra) \
MOVD LR, R0 \
MOVD R0, 16(R1) \
MOVW CR, R0 \
MOVD R0, 8(R1) \
MOVDU R1, -(extra)-FIXED_FRAME-SAVE_ALL_REG_SIZE(R1) \
SAVE_GPR(extra+FIXED_FRAME) \
SAVE_FPR(extra+FIXED_FRAME+SAVE_GPR_SIZE) \
SAVE_VR(extra+FIXED_FRAME+SAVE_GPR_SIZE+SAVE_FPR_SIZE, R0) \
MOVD $0, R0
// This unstacks the frame, restoring all callee-save registers
// as saved by STACK_AND_SAVE_HOST_TO_GO_ABI.
//
// R0 is not guaranteed to contain $0 after this macro.
#define UNSTACK_AND_RESTORE_GO_TO_HOST_ABI(extra) \
RESTORE_GPR(extra+FIXED_FRAME) \
RESTORE_FPR(extra+FIXED_FRAME+SAVE_GPR_SIZE) \
RESTORE_VR(extra+FIXED_FRAME+SAVE_GPR_SIZE+SAVE_FPR_SIZE, R0) \
ADD $(extra+FIXED_FRAME+SAVE_ALL_REG_SIZE), R1 \
MOVD 16(R1), R0 \
MOVD R0, LR \
MOVD 8(R1), R0 \
MOVW R0, CR
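// Usage sketch (mirroring crosscall2 in asm_ppc64x.s; example is a
// hypothetical symbol): an entry point arriving on the host ABI stacks a
// frame with, say, 32 bytes of argument space, calls into Go, and unwinds:
//
//	TEXT example(SB),NOSPLIT|NOFRAME,$0
//		STACK_AND_SAVE_HOST_TO_GO_ABI(32)
//		// ... call into Go ...
//		UNSTACK_AND_RESTORE_GO_TO_HOST_ABI(32)
//		RET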

src/runtime/cgo/asm_386.s Normal file

@@ -0,0 +1,42 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
// Set the x_crosscall2_ptr C function pointer variable to point to crosscall2.
// The pointer chain is: _crosscall2_ptr -> x_crosscall2_ptr -> crosscall2.
// Use a local trampoline to avoid taking the address of a dynamically exported
// function.
TEXT ·set_crosscall2(SB),NOSPLIT,$0-0
MOVL _crosscall2_ptr(SB), AX
MOVL $crosscall2_trampoline<>(SB), BX
MOVL BX, (AX)
RET
TEXT crosscall2_trampoline<>(SB),NOSPLIT,$0-0
JMP crosscall2(SB)
// Called by C code generated by cmd/cgo.
// func crosscall2(fn, a unsafe.Pointer, n int32, ctxt uintptr)
// Saves C callee-saved registers and calls cgocallback with three arguments.
// fn is the PC of a func(a unsafe.Pointer) function.
TEXT crosscall2(SB),NOSPLIT,$28-16
MOVL BP, 24(SP)
MOVL BX, 20(SP)
MOVL SI, 16(SP)
MOVL DI, 12(SP)
MOVL ctxt+12(FP), AX
MOVL AX, 8(SP)
MOVL a+4(FP), AX
MOVL AX, 4(SP)
MOVL fn+0(FP), AX
MOVL AX, 0(SP)
CALL runtime·cgocallback(SB)
MOVL 12(SP), DI
MOVL 16(SP), SI
MOVL 20(SP), BX
MOVL 24(SP), BP
RET

src/runtime/cgo/asm_amd64.s Normal file

@@ -0,0 +1,47 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
#include "abi_amd64.h"
// Set the x_crosscall2_ptr C function pointer variable to point to crosscall2.
// The pointer chain is: _crosscall2_ptr -> x_crosscall2_ptr -> crosscall2.
// Use a local trampoline to avoid taking the address of a dynamically exported
// function.
TEXT ·set_crosscall2(SB),NOSPLIT,$0-0
MOVQ _crosscall2_ptr(SB), AX
MOVQ $crosscall2_trampoline<>(SB), BX
MOVQ BX, (AX)
RET
TEXT crosscall2_trampoline<>(SB),NOSPLIT,$0-0
JMP crosscall2(SB)
// Called by C code generated by cmd/cgo.
// func crosscall2(fn, a unsafe.Pointer, n int32, ctxt uintptr)
// Saves C callee-saved registers and calls cgocallback with three arguments.
// fn is the PC of a func(a unsafe.Pointer) function.
// This signature is known to SWIG, so we can't change it.
TEXT crosscall2(SB),NOSPLIT,$0-0
PUSH_REGS_HOST_TO_ABI0()
// Make room for arguments to cgocallback.
ADJSP $0x18
#ifndef GOOS_windows
MOVQ DI, 0x0(SP) /* fn */
MOVQ SI, 0x8(SP) /* arg */
// Skip n in DX.
MOVQ CX, 0x10(SP) /* ctxt */
#else
MOVQ CX, 0x0(SP) /* fn */
MOVQ DX, 0x8(SP) /* arg */
// Skip n in R8.
MOVQ R9, 0x10(SP) /* ctxt */
#endif
CALL runtime·cgocallback(SB)
ADJSP $-0x18
POP_REGS_HOST_TO_ABI0()
RET

src/runtime/cgo/asm_arm.s Normal file

@@ -0,0 +1,69 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
// Set the x_crosscall2_ptr C function pointer variable to point to crosscall2.
// The pointer chain is: _crosscall2_ptr -> x_crosscall2_ptr -> crosscall2.
// Use a local trampoline to avoid taking the address of a dynamically exported
// function.
TEXT ·set_crosscall2(SB),NOSPLIT,$0-0
MOVW _crosscall2_ptr(SB), R1
MOVW $crosscall2_trampoline<>(SB), R2
MOVW R2, (R1)
RET
TEXT crosscall2_trampoline<>(SB),NOSPLIT,$0-0
JMP crosscall2(SB)
// Called by C code generated by cmd/cgo.
// func crosscall2(fn, a unsafe.Pointer, n int32, ctxt uintptr)
// Saves C callee-saved registers and calls cgocallback with three arguments.
// fn is the PC of a func(a unsafe.Pointer) function.
TEXT crosscall2(SB),NOSPLIT|NOFRAME,$0
SUB $(8*9), R13 // Reserve space for the floating point registers.
// The C arguments arrive in R0, R1, R2, and R3. We want to
// pass R0, R1, and R3 to Go, so we push those on the stack.
// Also, save C callee-save registers R4-R12.
MOVM.WP [R0, R1, R3, R4, R5, R6, R7, R8, R9, g, R11, R12], (R13)
// Finally, save the link register R14. This also puts the
// arguments we pushed for cgocallback where they need to be,
// starting at 4(R13).
MOVW.W R14, -4(R13)
// Skip floating point registers if goarmsoftfp!=0.
MOVB runtime·goarmsoftfp(SB), R11
CMP $0, R11
BNE skipfpsave
MOVD F8, (13*4+8*1)(R13)
MOVD F9, (13*4+8*2)(R13)
MOVD F10, (13*4+8*3)(R13)
MOVD F11, (13*4+8*4)(R13)
MOVD F12, (13*4+8*5)(R13)
MOVD F13, (13*4+8*6)(R13)
MOVD F14, (13*4+8*7)(R13)
MOVD F15, (13*4+8*8)(R13)
skipfpsave:
BL runtime·load_g(SB)
// We set up the arguments to cgocallback when saving registers above.
BL runtime·cgocallback(SB)
MOVB runtime·goarmsoftfp(SB), R11
CMP $0, R11
BNE skipfprest
MOVD (13*4+8*1)(R13), F8
MOVD (13*4+8*2)(R13), F9
MOVD (13*4+8*3)(R13), F10
MOVD (13*4+8*4)(R13), F11
MOVD (13*4+8*5)(R13), F12
MOVD (13*4+8*6)(R13), F13
MOVD (13*4+8*7)(R13), F14
MOVD (13*4+8*8)(R13), F15
skipfprest:
MOVW.P 4(R13), R14
MOVM.IAW (R13), [R0, R1, R3, R4, R5, R6, R7, R8, R9, g, R11, R12]
ADD $(8*9), R13
MOVW R14, R15

src/runtime/cgo/asm_arm64.s Normal file

@@ -0,0 +1,50 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
#include "abi_arm64.h"
// Set the x_crosscall2_ptr C function pointer variable to point to crosscall2.
// The pointer chain is: _crosscall2_ptr -> x_crosscall2_ptr -> crosscall2.
// Use a local trampoline to avoid taking the address of a dynamically exported
// function.
TEXT ·set_crosscall2(SB),NOSPLIT,$0-0
MOVD _crosscall2_ptr(SB), R1
MOVD $crosscall2_trampoline<>(SB), R2
MOVD R2, (R1)
RET
TEXT crosscall2_trampoline<>(SB),NOSPLIT,$0-0
JMP crosscall2(SB)
// Called by C code generated by cmd/cgo.
// func crosscall2(fn, a unsafe.Pointer, n int32, ctxt uintptr)
// Saves C callee-saved registers and calls cgocallback with three arguments.
// fn is the PC of a func(a unsafe.Pointer) function.
TEXT crosscall2(SB),NOSPLIT|NOFRAME,$0
/*
* We still need to save all callee-save registers as before, and then
* push 3 args for fn (R0, R1, R3), skipping R2.
* Also note that at procedure entry in the gc world, 8(RSP) will be the
* first arg.
*/
SUB $(8*24), RSP
STP (R0, R1), (8*1)(RSP)
MOVD R3, (8*3)(RSP)
SAVE_R19_TO_R28(8*4)
SAVE_F8_TO_F15(8*14)
STP (R29, R30), (8*22)(RSP)
// Initialize Go ABI environment
BL runtime·load_g(SB)
BL runtime·cgocallback(SB)
RESTORE_R19_TO_R28(8*4)
RESTORE_F8_TO_F15(8*14)
LDP (8*22)(RSP), (R29, R30)
ADD $(8*24), RSP
RET

src/runtime/cgo/asm_loong64.s Normal file

@@ -0,0 +1,53 @@
// Copyright 2022 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
#include "abi_loong64.h"
// Set the x_crosscall2_ptr C function pointer variable to point to crosscall2.
// The pointer chain is: _crosscall2_ptr -> x_crosscall2_ptr -> crosscall2.
// Use a local trampoline to avoid taking the address of a dynamically exported
// function.
TEXT ·set_crosscall2(SB),NOSPLIT,$0-0
MOVV _crosscall2_ptr(SB), R5
MOVV $crosscall2_trampoline<>(SB), R6
MOVV R6, (R5)
RET
TEXT crosscall2_trampoline<>(SB),NOSPLIT,$0-0
JMP crosscall2(SB)
// Called by C code generated by cmd/cgo.
// func crosscall2(fn, a unsafe.Pointer, n int32, ctxt uintptr)
// Saves C callee-saved registers and calls cgocallback with three arguments.
// fn is the PC of a func(a unsafe.Pointer) function.
TEXT crosscall2(SB),NOSPLIT|NOFRAME,$0
/*
* We still need to save all callee-save registers as before, and then
* push 3 args for fn (R4, R5, R7), skipping R6.
* Also note that at procedure entry in the gc world, 8(R3) will be the
* first arg.
*/
ADDV $(-23*8), R3
MOVV R4, (1*8)(R3) // fn unsafe.Pointer
MOVV R5, (2*8)(R3) // a unsafe.Pointer
MOVV R7, (3*8)(R3) // ctxt uintptr
SAVE_R22_TO_R31((4*8))
SAVE_F24_TO_F31((14*8))
MOVV R1, (22*8)(R3)
// Initialize Go ABI environment
JAL runtime·load_g(SB)
JAL runtime·cgocallback(SB)
RESTORE_R22_TO_R31((4*8))
RESTORE_F24_TO_F31((14*8))
MOVV (22*8)(R3), R1
ADDV $(23*8), R3
RET

src/runtime/cgo/asm_mips64x.s Normal file

@@ -0,0 +1,95 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build mips64 || mips64le
#include "textflag.h"
// Set the x_crosscall2_ptr C function pointer variable to point to crosscall2.
// The pointer chain is: _crosscall2_ptr -> x_crosscall2_ptr -> crosscall2.
// Use a local trampoline to avoid taking the address of a dynamically exported
// function.
TEXT ·set_crosscall2(SB),NOSPLIT,$0-0
MOVV _crosscall2_ptr(SB), R5
MOVV $crosscall2_trampoline<>(SB), R6
MOVV R6, (R5)
RET
TEXT crosscall2_trampoline<>(SB),NOSPLIT,$0-0
JMP crosscall2(SB)
// Called by C code generated by cmd/cgo.
// func crosscall2(fn, a unsafe.Pointer, n int32, ctxt uintptr)
// Saves C callee-saved registers and calls cgocallback with three arguments.
// fn is the PC of a func(a unsafe.Pointer) function.
TEXT crosscall2(SB),NOSPLIT|NOFRAME,$0
/*
* We still need to save all callee-save registers as before, and then
* push 3 args for fn (R4, R5, R7), skipping R6.
* Also note that at procedure entry in the gc world, 8(R29) will be the
* first arg.
*/
#ifndef GOMIPS64_softfloat
ADDV $(-8*23), R29
#else
ADDV $(-8*15), R29
#endif
MOVV R4, (8*1)(R29) // fn unsafe.Pointer
MOVV R5, (8*2)(R29) // a unsafe.Pointer
MOVV R7, (8*3)(R29) // ctxt uintptr
MOVV R16, (8*4)(R29)
MOVV R17, (8*5)(R29)
MOVV R18, (8*6)(R29)
MOVV R19, (8*7)(R29)
MOVV R20, (8*8)(R29)
MOVV R21, (8*9)(R29)
MOVV R22, (8*10)(R29)
MOVV R23, (8*11)(R29)
MOVV RSB, (8*12)(R29)
MOVV g, (8*13)(R29)
MOVV R31, (8*14)(R29)
#ifndef GOMIPS64_softfloat
MOVD F24, (8*15)(R29)
MOVD F25, (8*16)(R29)
MOVD F26, (8*17)(R29)
MOVD F27, (8*18)(R29)
MOVD F28, (8*19)(R29)
MOVD F29, (8*20)(R29)
MOVD F30, (8*21)(R29)
MOVD F31, (8*22)(R29)
#endif
// Initialize Go ABI environment
// prepare SB register = PC & 0xffffffff00000000
BGEZAL R0, 1(PC)
SRLV $32, R31, RSB
SLLV $32, RSB
JAL runtime·load_g(SB)
JAL runtime·cgocallback(SB)
MOVV (8*4)(R29), R16
MOVV (8*5)(R29), R17
MOVV (8*6)(R29), R18
MOVV (8*7)(R29), R19
MOVV (8*8)(R29), R20
MOVV (8*9)(R29), R21
MOVV (8*10)(R29), R22
MOVV (8*11)(R29), R23
MOVV (8*12)(R29), RSB
MOVV (8*13)(R29), g
MOVV (8*14)(R29), R31
#ifndef GOMIPS64_softfloat
MOVD (8*15)(R29), F24
MOVD (8*16)(R29), F25
MOVD (8*17)(R29), F26
MOVD (8*18)(R29), F27
MOVD (8*19)(R29), F28
MOVD (8*20)(R29), F29
MOVD (8*21)(R29), F30
MOVD (8*22)(R29), F31
ADDV $(8*23), R29
#else
ADDV $(8*15), R29
#endif
RET

src/runtime/cgo/asm_mipsx.s Normal file

@@ -0,0 +1,88 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build mips || mipsle
#include "textflag.h"
// Set the x_crosscall2_ptr C function pointer variable to point to crosscall2.
// The pointer chain is: _crosscall2_ptr -> x_crosscall2_ptr -> crosscall2.
// Use a local trampoline to avoid taking the address of a dynamically exported
// function.
TEXT ·set_crosscall2(SB),NOSPLIT,$0-0
MOVW _crosscall2_ptr(SB), R5
MOVW $crosscall2_trampoline<>(SB), R6
MOVW R6, (R5)
RET
TEXT crosscall2_trampoline<>(SB),NOSPLIT,$0-0
JMP crosscall2(SB)
// Called by C code generated by cmd/cgo.
// func crosscall2(fn, a unsafe.Pointer, n int32, ctxt uintptr)
// Saves C callee-saved registers and calls cgocallback with three arguments.
// fn is the PC of a func(a unsafe.Pointer) function.
TEXT crosscall2(SB),NOSPLIT|NOFRAME,$0
/*
* We still need to save all callee-save registers as before, and then
* push 3 args for fn (R4, R5, R7), skipping R6.
* Also note that at procedure entry in the gc world, 4(R29) will be the
* first arg.
*/
// Space for 9 callee-saved GPR + LR + 6 callee-saved FPR.
// The O32 ABI allows us to smash the 16-byte argument area of the caller's frame.
#ifndef GOMIPS_softfloat
SUBU $(4*14+8*6-16), R29
#else
SUBU $(4*14-16), R29 // For soft-float, no FPR.
#endif
MOVW R4, (4*1)(R29) // fn unsafe.Pointer
MOVW R5, (4*2)(R29) // a unsafe.Pointer
MOVW R7, (4*3)(R29) // ctxt uintptr
MOVW R16, (4*4)(R29)
MOVW R17, (4*5)(R29)
MOVW R18, (4*6)(R29)
MOVW R19, (4*7)(R29)
MOVW R20, (4*8)(R29)
MOVW R21, (4*9)(R29)
MOVW R22, (4*10)(R29)
MOVW R23, (4*11)(R29)
MOVW g, (4*12)(R29)
MOVW R31, (4*13)(R29)
#ifndef GOMIPS_softfloat
MOVD F20, (4*14)(R29)
MOVD F22, (4*14+8*1)(R29)
MOVD F24, (4*14+8*2)(R29)
MOVD F26, (4*14+8*3)(R29)
MOVD F28, (4*14+8*4)(R29)
MOVD F30, (4*14+8*5)(R29)
#endif
JAL runtime·load_g(SB)
JAL runtime·cgocallback(SB)
MOVW (4*4)(R29), R16
MOVW (4*5)(R29), R17
MOVW (4*6)(R29), R18
MOVW (4*7)(R29), R19
MOVW (4*8)(R29), R20
MOVW (4*9)(R29), R21
MOVW (4*10)(R29), R22
MOVW (4*11)(R29), R23
MOVW (4*12)(R29), g
MOVW (4*13)(R29), R31
#ifndef GOMIPS_softfloat
MOVD (4*14)(R29), F20
MOVD (4*14+8*1)(R29), F22
MOVD (4*14+8*2)(R29), F24
MOVD (4*14+8*3)(R29), F26
MOVD (4*14+8*4)(R29), F28
MOVD (4*14+8*5)(R29), F30
ADDU $(4*14+8*6-16), R29
#else
ADDU $(4*14-16), R29
#endif
RET

src/runtime/cgo/asm_ppc64x.s Normal file

@@ -0,0 +1,70 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build ppc64 || ppc64le
#include "textflag.h"
#include "asm_ppc64x.h"
#include "abi_ppc64x.h"
#ifdef GO_PPC64X_HAS_FUNCDESC
// crosscall2 is marked with go:cgo_export_static. On AIX, this creates and exports
// the symbol name and descriptor as the AIX linker expects, but does not work if
// referenced from within Go. Create and use an aliased descriptor of crosscall2
// to work around this.
DEFINE_PPC64X_FUNCDESC(_crosscall2<>, crosscall2)
#define CROSSCALL2_FPTR $_crosscall2<>(SB)
#else
// Use a local trampoline to avoid taking the address of a dynamically exported
// function.
#define CROSSCALL2_FPTR $crosscall2_trampoline<>(SB)
#endif
// Set the x_crosscall2_ptr C function pointer variable to point to crosscall2.
// The pointer chain is: _crosscall2_ptr -> x_crosscall2_ptr -> crosscall2.
TEXT ·set_crosscall2(SB),NOSPLIT,$0-0
MOVD _crosscall2_ptr(SB), R5
MOVD CROSSCALL2_FPTR, R6
MOVD R6, (R5)
RET
TEXT crosscall2_trampoline<>(SB),NOSPLIT,$0-0
JMP crosscall2(SB)
// Called by C code generated by cmd/cgo.
// func crosscall2(fn, a unsafe.Pointer, n int32, ctxt uintptr)
// Saves C callee-saved registers and calls cgocallback with three arguments.
// fn is the PC of a func(a unsafe.Pointer) function.
// The value of R2 is saved in the new stack frame, and not
// the caller's frame, due to issue #43228.
TEXT crosscall2(SB),NOSPLIT|NOFRAME,$0
// Start with standard C stack frame layout and linkage, allocate
// 32 bytes of argument space, save callee-save regs, and set R0 to $0.
STACK_AND_SAVE_HOST_TO_GO_ABI(32)
// The above will not preserve R2 (TOC). Save it in case Go is
// compiled without a TOC pointer (e.g -buildmode=default).
MOVD R2, 24(R1)
// Load the current g.
BL runtime·load_g(SB)
#ifdef GO_PPC64X_HAS_FUNCDESC
// Load the real entry address from the first slot of the function descriptor.
// The first argument fn might be null; that means dropm is being called in the pthread key destructor.
CMP R3, $0
BEQ nil_fn
MOVD 8(R3), R2
MOVD (R3), R3
nil_fn:
#endif
MOVD R3, FIXED_FRAME+0(R1) // fn unsafe.Pointer
MOVD R4, FIXED_FRAME+8(R1) // a unsafe.Pointer
// Skip R5 = n uint32
MOVD R6, FIXED_FRAME+16(R1) // ctxt uintptr
BL runtime·cgocallback(SB)
// Restore the old frame, and R2.
MOVD 24(R1), R2
UNSTACK_AND_RESTORE_GO_TO_HOST_ABI(32)
RET

src/runtime/cgo/asm_riscv64.s Normal file

@@ -0,0 +1,91 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
// Set the x_crosscall2_ptr C function pointer variable to point to crosscall2.
// The pointer chain is: _crosscall2_ptr -> x_crosscall2_ptr -> crosscall2.
// Use a local trampoline to avoid taking the address of a dynamically exported
// function.
TEXT ·set_crosscall2(SB),NOSPLIT,$0-0
MOV _crosscall2_ptr(SB), X7
MOV $crosscall2_trampoline<>(SB), X8
MOV X8, (X7)
RET
TEXT crosscall2_trampoline<>(SB),NOSPLIT,$0-0
JMP crosscall2(SB)
// Called by C code generated by cmd/cgo.
// func crosscall2(fn, a unsafe.Pointer, n int32, ctxt uintptr)
// Saves C callee-saved registers and calls cgocallback with three arguments.
// fn is the PC of a func(a unsafe.Pointer) function.
TEXT crosscall2(SB),NOSPLIT|NOFRAME,$0
/*
* Push arguments for fn (X10, X11, X13), along with all callee-save
* registers. Note that at procedure entry the first argument is at
* 8(X2).
*/
ADD $(-8*29), X2
MOV X10, (8*1)(X2) // fn unsafe.Pointer
MOV X11, (8*2)(X2) // a unsafe.Pointer
MOV X13, (8*3)(X2) // ctxt uintptr
MOV X8, (8*4)(X2)
MOV X9, (8*5)(X2)
MOV X18, (8*6)(X2)
MOV X19, (8*7)(X2)
MOV X20, (8*8)(X2)
MOV X21, (8*9)(X2)
MOV X22, (8*10)(X2)
MOV X23, (8*11)(X2)
MOV X24, (8*12)(X2)
MOV X25, (8*13)(X2)
MOV X26, (8*14)(X2)
MOV g, (8*15)(X2)
MOV X1, (8*16)(X2)
MOVD F8, (8*17)(X2)
MOVD F9, (8*18)(X2)
MOVD F18, (8*19)(X2)
MOVD F19, (8*20)(X2)
MOVD F20, (8*21)(X2)
MOVD F21, (8*22)(X2)
MOVD F22, (8*23)(X2)
MOVD F23, (8*24)(X2)
MOVD F24, (8*25)(X2)
MOVD F25, (8*26)(X2)
MOVD F26, (8*27)(X2)
MOVD F27, (8*28)(X2)
// Initialize Go ABI environment
CALL runtime·load_g(SB)
CALL runtime·cgocallback(SB)
MOV (8*4)(X2), X8
MOV (8*5)(X2), X9
MOV (8*6)(X2), X18
MOV (8*7)(X2), X19
MOV (8*8)(X2), X20
MOV (8*9)(X2), X21
MOV (8*10)(X2), X22
MOV (8*11)(X2), X23
MOV (8*12)(X2), X24
MOV (8*13)(X2), X25
MOV (8*14)(X2), X26
MOV (8*15)(X2), g
MOV (8*16)(X2), X1
MOVD (8*17)(X2), F8
MOVD (8*18)(X2), F9
MOVD (8*19)(X2), F18
MOVD (8*20)(X2), F19
MOVD (8*21)(X2), F20
MOVD (8*22)(X2), F21
MOVD (8*23)(X2), F22
MOVD (8*24)(X2), F23
MOVD (8*25)(X2), F24
MOVD (8*26)(X2), F25
MOVD (8*27)(X2), F26
MOVD (8*28)(X2), F27
ADD $(8*29), X2
RET

src/runtime/cgo/asm_s390x.s Normal file

@@ -0,0 +1,68 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
// Set the x_crosscall2_ptr C function pointer variable to point to crosscall2.
// The pointer chain is: _crosscall2_ptr -> x_crosscall2_ptr -> crosscall2.
// Use a local trampoline to avoid taking the address of a dynamically exported
// function.
TEXT ·set_crosscall2(SB),NOSPLIT,$0-0
MOVD _crosscall2_ptr(SB), R1
MOVD $crosscall2_trampoline<>(SB), R2
MOVD R2, (R1)
RET
TEXT crosscall2_trampoline<>(SB),NOSPLIT,$0-0
JMP crosscall2(SB)
// Called by C code generated by cmd/cgo.
// func crosscall2(fn, a unsafe.Pointer, n int32, ctxt uintptr)
// Saves C callee-saved registers and calls cgocallback with three arguments.
// fn is the PC of a func(a unsafe.Pointer) function.
TEXT crosscall2(SB),NOSPLIT|NOFRAME,$0
// Start with standard C stack frame layout and linkage.
// Save R6-R15 in the register save area of the calling function.
STMG R6, R15, 48(R15)
// Allocate 96 bytes on the stack.
MOVD $-96(R15), R15
// Save F8-F15 in our stack frame.
FMOVD F8, 32(R15)
FMOVD F9, 40(R15)
FMOVD F10, 48(R15)
FMOVD F11, 56(R15)
FMOVD F12, 64(R15)
FMOVD F13, 72(R15)
FMOVD F14, 80(R15)
FMOVD F15, 88(R15)
// Initialize Go ABI environment.
BL runtime·load_g(SB)
MOVD R2, 8(R15) // fn unsafe.Pointer
MOVD R3, 16(R15) // a unsafe.Pointer
// Skip R4 = n uint32
MOVD R5, 24(R15) // ctxt uintptr
BL runtime·cgocallback(SB)
FMOVD 32(R15), F8
FMOVD 40(R15), F9
FMOVD 48(R15), F10
FMOVD 56(R15), F11
FMOVD 64(R15), F12
FMOVD 72(R15), F13
FMOVD 80(R15), F14
FMOVD 88(R15), F15
// De-allocate stack frame.
MOVD $96(R15), R15
// Restore R6-R15.
LMG 48(R15), R6, R15
RET

src/runtime/cgo/asm_wasm.s Normal file

@@ -0,0 +1,11 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
TEXT ·set_crosscall2(SB),NOSPLIT,$0-0
UNDEF
TEXT crosscall2(SB), NOSPLIT, $0
UNDEF

src/runtime/cgo/callbacks.go Normal file

@@ -0,0 +1,152 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cgo
import "unsafe"
// These utility functions are available to be called from code
// compiled with gcc via crosscall2.
// The declaration of crosscall2 is:
// void crosscall2(void (*fn)(void *), void *, int);
//
// We need to export the symbol crosscall2 in order to support
// callbacks from shared libraries. This applies regardless of
// linking mode.
//
// Compatibility note: SWIG uses crosscall2 in exactly one situation:
// to call _cgo_panic using the pattern shown below. We need to keep
// that pattern working. In particular, crosscall2 actually takes four
// arguments, but it works to call it with three arguments when
// calling _cgo_panic.
//
//go:cgo_export_static crosscall2
//go:cgo_export_dynamic crosscall2
// Panic. The argument is converted into a Go string.
// Call like this in code compiled with gcc:
// struct { const char *p; } a;
// a.p = /* string to pass to panic */;
// crosscall2(_cgo_panic, &a, sizeof a);
// /* The function call will not return. */
// TODO: We should export a regular C function to panic, change SWIG
// to use that instead of the above pattern, and then we can drop
// backwards-compatibility from crosscall2 and stop exporting it.
//go:linkname _runtime_cgo_panic_internal runtime._cgo_panic_internal
func _runtime_cgo_panic_internal(p *byte)
//go:linkname _cgo_panic _cgo_panic
//go:cgo_export_static _cgo_panic
//go:cgo_export_dynamic _cgo_panic
func _cgo_panic(a *struct{ cstr *byte }) {
_runtime_cgo_panic_internal(a.cstr)
}
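// Illustration (a sketch, not part of this file): the C-side pattern from the
// comment above wrapped in a helper; the name die is hypothetical:
//
//	extern void crosscall2(void (*fn)(void *), void *, int);
//	extern void _cgo_panic(void *);
//
//	static void die(const char *msg) {
//		struct { const char *p; } a;
//		a.p = msg;
//		crosscall2(_cgo_panic, &a, sizeof a);
//		/* not reached: the panic does not return */
//	}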
//go:cgo_import_static x_cgo_init
//go:linkname x_cgo_init x_cgo_init
//go:linkname _cgo_init _cgo_init
var x_cgo_init byte
var _cgo_init = &x_cgo_init
//go:cgo_import_static x_cgo_thread_start
//go:linkname x_cgo_thread_start x_cgo_thread_start
//go:linkname _cgo_thread_start _cgo_thread_start
var x_cgo_thread_start byte
var _cgo_thread_start = &x_cgo_thread_start
// Creates a new system thread without updating any Go state.
//
// This method is invoked during shared library loading to create a new OS
// thread to perform the runtime initialization. This method is similar to
// _cgo_sys_thread_start except that it doesn't update any Go state.
//go:cgo_import_static x_cgo_sys_thread_create
//go:linkname x_cgo_sys_thread_create x_cgo_sys_thread_create
//go:linkname _cgo_sys_thread_create _cgo_sys_thread_create
var x_cgo_sys_thread_create byte
var _cgo_sys_thread_create = &x_cgo_sys_thread_create
// Indicates whether a dummy thread key has been created or not.
//
// When calling a Go-exported function from C, we register a destructor
// callback for a dummy thread key using pthread_key_create.
//go:cgo_import_static x_cgo_pthread_key_created
//go:linkname x_cgo_pthread_key_created x_cgo_pthread_key_created
//go:linkname _cgo_pthread_key_created _cgo_pthread_key_created
var x_cgo_pthread_key_created byte
var _cgo_pthread_key_created = &x_cgo_pthread_key_created
// Export crosscall2 as a C function pointer variable.
// Used to call dropm in the pthread key destructor while a C thread is exiting.
//go:cgo_import_static x_crosscall2_ptr
//go:linkname x_crosscall2_ptr x_crosscall2_ptr
//go:linkname _crosscall2_ptr _crosscall2_ptr
var x_crosscall2_ptr byte
var _crosscall2_ptr = &x_crosscall2_ptr
// Set the x_crosscall2_ptr C function pointer variable to point to crosscall2.
// It's for the runtime package to call at init time.
func set_crosscall2()
//go:linkname _set_crosscall2 runtime.set_crosscall2
var _set_crosscall2 = set_crosscall2
// Store the g into the thread-specific value,
// so that the pthread key destructor will call dropm when the thread exits.
//go:cgo_import_static x_cgo_bindm
//go:linkname x_cgo_bindm x_cgo_bindm
//go:linkname _cgo_bindm _cgo_bindm
var x_cgo_bindm byte
var _cgo_bindm = &x_cgo_bindm
// Notifies that the runtime has been initialized.
//
// We currently block at every CGO entry point (via _cgo_wait_runtime_init_done)
// to ensure that the runtime has been initialized before the CGO call is
// executed. This is necessary for shared libraries where we kick off runtime
// initialization in a separate thread and return without waiting for this
// thread to complete the init.
//go:cgo_import_static x_cgo_notify_runtime_init_done
//go:linkname x_cgo_notify_runtime_init_done x_cgo_notify_runtime_init_done
//go:linkname _cgo_notify_runtime_init_done _cgo_notify_runtime_init_done
var x_cgo_notify_runtime_init_done byte
var _cgo_notify_runtime_init_done = &x_cgo_notify_runtime_init_done
// Sets the traceback context function. See runtime.SetCgoTraceback.
//go:cgo_import_static x_cgo_set_context_function
//go:linkname x_cgo_set_context_function x_cgo_set_context_function
//go:linkname _cgo_set_context_function _cgo_set_context_function
var x_cgo_set_context_function byte
var _cgo_set_context_function = &x_cgo_set_context_function
// Calls a libc function to execute background work injected via libc
// interceptors, such as processing pending signals under the thread
// sanitizer.
//
// Left as a nil pointer if no libc interceptors are expected.
//go:cgo_import_static _cgo_yield
//go:linkname _cgo_yield _cgo_yield
var _cgo_yield unsafe.Pointer
//go:cgo_export_static _cgo_topofstack
//go:cgo_export_dynamic _cgo_topofstack
// x_cgo_getstackbound gets the thread's C stack size and
// sets the G's stack bound based on the stack size.
//go:cgo_import_static x_cgo_getstackbound
//go:linkname x_cgo_getstackbound x_cgo_getstackbound
//go:linkname _cgo_getstackbound _cgo_getstackbound
var x_cgo_getstackbound byte
var _cgo_getstackbound = &x_cgo_getstackbound
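// The declarations above all follow one pattern (sketch, with a hypothetical
// helper foo): the C symbol x_cgo_foo is imported as an opaque byte, and the
// Go-visible _cgo_foo is its address, giving the runtime a callable pointer:
//
//	//go:cgo_import_static x_cgo_foo
//	//go:linkname x_cgo_foo x_cgo_foo
//	//go:linkname _cgo_foo _cgo_foo
//	var x_cgo_foo byte
//	var _cgo_foo = &x_cgo_foo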

src/runtime/cgo/callbacks_aix.go Normal file

@@ -0,0 +1,12 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cgo
// These functions must be exported in order to perform
// longcalls on cgo programs (cf. gcc_aix_ppc64.c).
//
//go:cgo_export_static __cgo_topofstack
//go:cgo_export_static runtime.rt0_go
//go:cgo_export_static _rt0_ppc64_aix_lib

src/runtime/cgo/callbacks_traceback.go Normal file

@@ -0,0 +1,17 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build darwin || linux
package cgo
import _ "unsafe" // for go:linkname
// Calls the traceback function passed to SetCgoTraceback.
//go:cgo_import_static x_cgo_callers
//go:linkname x_cgo_callers x_cgo_callers
//go:linkname _cgo_callers _cgo_callers
var x_cgo_callers byte
var _cgo_callers = &x_cgo_callers

src/runtime/cgo/cgo.go Normal file

@@ -0,0 +1,40 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
Package cgo contains runtime support for code generated
by the cgo tool. See the documentation for the cgo command
for details on using cgo.
*/
package cgo
/*
#cgo darwin,!arm64 LDFLAGS: -lpthread
#cgo darwin,arm64 LDFLAGS: -framework CoreFoundation
#cgo dragonfly LDFLAGS: -lpthread
#cgo freebsd LDFLAGS: -lpthread
#cgo android LDFLAGS: -llog
#cgo !android,linux LDFLAGS: -lpthread
#cgo netbsd LDFLAGS: -lpthread
#cgo openbsd LDFLAGS: -lpthread
#cgo aix LDFLAGS: -Wl,-berok
#cgo solaris LDFLAGS: -lxnet
#cgo solaris LDFLAGS: -lsocket
// Use -fno-stack-protector to avoid problems locating the
// proper support functions. See issues #52919, #54313, #58385.
#cgo CFLAGS: -Wall -Werror -fno-stack-protector
#cgo solaris CPPFLAGS: -D_POSIX_PTHREAD_SEMANTICS
*/
import "C"
import "runtime/internal/sys"
// Incomplete is used specifically for the semantics of incomplete C types.
type Incomplete struct {
_ sys.NotInHeap
}
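// Illustration (an assumption based on the cgo documentation, not this file):
// for a C struct type that is declared but never defined, cgo maps the Go
// side onto Incomplete, roughly:
//
//	type _Ctype_struct_undefined = cgo.Incomplete
//
// so Go code can pass pointers to such values around while the runtime knows
// they never live in the Go heap.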

src/runtime/cgo/dragonfly.go Normal file

@@ -0,0 +1,19 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build dragonfly
package cgo
import _ "unsafe" // for go:linkname
// Supply environ and __progname, because we don't
// link against the standard DragonFly crt0.o and the
// libc dynamic library needs them.
//go:linkname _environ environ
//go:linkname _progname __progname
var _environ uintptr
var _progname uintptr

src/runtime/cgo/freebsd.go Normal file

@@ -0,0 +1,22 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build freebsd
package cgo
import _ "unsafe" // for go:linkname
// Supply environ and __progname, because we don't
// link against the standard FreeBSD crt0.o and the
// libc dynamic library needs them.
//go:linkname _environ environ
//go:linkname _progname __progname
//go:cgo_export_dynamic environ
//go:cgo_export_dynamic __progname
var _environ uintptr
var _progname uintptr

src/runtime/cgo/gcc_386.S Normal file

@@ -0,0 +1,48 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
.file "gcc_386.S"
/*
* Windows still insists on underscore prefixes for C function names.
*/
#if defined(_WIN32)
#define EXT(s) _##s
#else
#define EXT(s) s
#endif
/*
* void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g)
*
* Calling into the gc tool chain, where all registers are caller save.
* Called from standard x86 ABI, where %ebp, %ebx, %esi,
* and %edi are callee-save, so they must be saved explicitly.
*/
.globl EXT(crosscall1)
EXT(crosscall1):
pushl %ebp
movl %esp, %ebp
pushl %ebx
pushl %esi
pushl %edi
movl 16(%ebp), %eax /* g */
pushl %eax
movl 12(%ebp), %eax /* setg_gcc */
call *%eax
popl %eax
movl 8(%ebp), %eax /* fn */
call *%eax
popl %edi
popl %esi
popl %ebx
popl %ebp
ret
#ifdef __ELF__
.section .note.GNU-stack,"",@progbits
#endif

src/runtime/cgo/gcc_aix_ppc64.S Normal file

@@ -0,0 +1,132 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
.file "gcc_aix_ppc64.S"
/*
* void crosscall_ppc64(void (*fn)(void), void *g)
*
* Calling into the gc tool chain, where all registers are caller save.
* Called from standard ppc64 C ABI, where r2, r14-r31, f14-f31 are
* callee-save, so they must be saved explicitly.
* AIX has a special assembly syntax and keywords that can be mixed with
* Linux assembly.
*/
.toc
.csect .text[PR]
.globl crosscall_ppc64
.globl .crosscall_ppc64
.csect crosscall_ppc64[DS]
crosscall_ppc64:
.llong .crosscall_ppc64, TOC[tc0], 0
.csect .text[PR]
.crosscall_ppc64:
// Start with standard C stack frame layout and linkage
mflr 0
std 0, 16(1) // Save LR in caller's frame
std 2, 40(1) // Save TOC in caller's frame
bl saveregs
stdu 1, -296(1)
// Set up Go ABI constant registers
// Must match _cgo_reginit in runtime package.
xor 0, 0, 0
// Restore g pointer (r30 in Go ABI, which may have been clobbered by C)
mr 30, 4
// Call fn
mr 12, 3
mtctr 12
bctrl
addi 1, 1, 296
bl restoreregs
ld 2, 40(1)
ld 0, 16(1)
mtlr 0
blr
saveregs:
// Save callee-save registers
// O=-288; for R in {14..31}; do echo "\tstd\t$R, $O(1)"; ((O+=8)); done; for F in f{14..31}; do echo "\tstfd\t$F, $O(1)"; ((O+=8)); done
std 14, -288(1)
std 15, -280(1)
std 16, -272(1)
std 17, -264(1)
std 18, -256(1)
std 19, -248(1)
std 20, -240(1)
std 21, -232(1)
std 22, -224(1)
std 23, -216(1)
std 24, -208(1)
std 25, -200(1)
std 26, -192(1)
std 27, -184(1)
std 28, -176(1)
std 29, -168(1)
std 30, -160(1)
std 31, -152(1)
stfd 14, -144(1)
stfd 15, -136(1)
stfd 16, -128(1)
stfd 17, -120(1)
stfd 18, -112(1)
stfd 19, -104(1)
stfd 20, -96(1)
stfd 21, -88(1)
stfd 22, -80(1)
stfd 23, -72(1)
stfd 24, -64(1)
stfd 25, -56(1)
stfd 26, -48(1)
stfd 27, -40(1)
stfd 28, -32(1)
stfd 29, -24(1)
stfd 30, -16(1)
stfd 31, -8(1)
blr
restoreregs:
// O=-288; for R in {14..31}; do echo "\tld\t$R, $O(1)"; ((O+=8)); done; for F in {14..31}; do echo "\tlfd\t$F, $O(1)"; ((O+=8)); done
ld 14, -288(1)
ld 15, -280(1)
ld 16, -272(1)
ld 17, -264(1)
ld 18, -256(1)
ld 19, -248(1)
ld 20, -240(1)
ld 21, -232(1)
ld 22, -224(1)
ld 23, -216(1)
ld 24, -208(1)
ld 25, -200(1)
ld 26, -192(1)
ld 27, -184(1)
ld 28, -176(1)
ld 29, -168(1)
ld 30, -160(1)
ld 31, -152(1)
lfd 14, -144(1)
lfd 15, -136(1)
lfd 16, -128(1)
lfd 17, -120(1)
lfd 18, -112(1)
lfd 19, -104(1)
lfd 20, -96(1)
lfd 21, -88(1)
lfd 22, -80(1)
lfd 23, -72(1)
lfd 24, -64(1)
lfd 25, -56(1)
lfd 26, -48(1)
lfd 27, -40(1)
lfd 28, -32(1)
lfd 29, -24(1)
lfd 30, -16(1)
lfd 31, -8(1)
blr

src/runtime/cgo/gcc_aix_ppc64.c Normal file

@@ -0,0 +1,35 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
* On AIX, calls to _cgo_topofstack and Go main are forced to be longcalls.
* Without this, ld might add trampolines in the middle of the .text section
* to reach these functions, which are normally declared in the runtime package.
*/
extern int __attribute__((longcall)) __cgo_topofstack(void);
extern int __attribute__((longcall)) runtime_rt0_go(int argc, char **argv);
extern void __attribute__((longcall)) _rt0_ppc64_aix_lib(void);
int _cgo_topofstack(void) {
return __cgo_topofstack();
}
int main(int argc, char **argv) {
return runtime_rt0_go(argc, argv);
}
static void libinit(void) __attribute__ ((constructor));
/*
* libinit replaces the .init_array section, which isn't available on AIX.
* Using __attribute__ ((constructor)) lets gcc handle this instead of
* adding special code in cmd/link.
* However, it will be called for every Go program that uses cgo.
* Inside _rt0_ppc64_aix_lib(), runtime.isarchive is checked in order
* to know if this program is a c-archive or a simple cgo program.
* If it's not set, _rt0_ppc64_aix_lib() returns directly.
*/
static void libinit() {
_rt0_ppc64_aix_lib();
}

src/runtime/cgo/gcc_amd64.S Normal file

@@ -0,0 +1,55 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
.file "gcc_amd64.S"
/*
* Apple still insists on underscore prefixes for C function names.
*/
#if defined(__APPLE__)
#define EXT(s) _##s
#else
#define EXT(s) s
#endif
/*
* void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g)
*
* Calling into the gc tool chain, where all registers are caller save.
* Called from standard x86-64 ABI, where %rbx, %rbp, %r12-%r15
* are callee-save so they must be saved explicitly.
* The standard x86-64 ABI passes the three arguments fn, setg_gcc, g
* in %rdi, %rsi, %rdx.
*/
.globl EXT(crosscall1)
EXT(crosscall1):
pushq %rbx
pushq %rbp
pushq %r12
pushq %r13
pushq %r14
pushq %r15
#if defined(_WIN64)
movq %r8, %rdi /* arg of setg_gcc */
call *%rdx /* setg_gcc */
call *%rcx /* fn */
#else
movq %rdi, %rbx
movq %rdx, %rdi /* arg of setg_gcc */
call *%rsi /* setg_gcc */
call *%rbx /* fn */
#endif
popq %r15
popq %r14
popq %r13
popq %r12
popq %rbp
popq %rbx
ret
#ifdef __ELF__
.section .note.GNU-stack,"",@progbits
#endif

src/runtime/cgo/gcc_android.c Normal file

@@ -0,0 +1,90 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include <stdarg.h>
#include <android/log.h>
#include <pthread.h>
#include <dlfcn.h>
#include "libcgo.h"
void
fatalf(const char* format, ...)
{
va_list ap;
// Write to both stderr and logcat.
//
// When running from an .apk, /dev/stderr and /dev/stdout
// redirect to /dev/null. And when running a test binary
// via adb shell, it's easy to miss logcat.
fprintf(stderr, "runtime/cgo: ");
va_start(ap, format);
vfprintf(stderr, format, ap);
va_end(ap);
fprintf(stderr, "\n");
va_start(ap, format);
__android_log_vprint(ANDROID_LOG_FATAL, "runtime/cgo", format, ap);
va_end(ap);
abort();
}
// Truncated to a different magic value on 32-bit; that's ok.
#define magic1 (0x23581321345589ULL)
// From https://android.googlesource.com/platform/bionic/+/refs/heads/android10-tests-release/libc/private/bionic_asm_tls.h#69.
#define TLS_SLOT_APP 2
// inittls allocates a thread-local storage slot for g.
//
// It finds the first available slot using pthread_key_create and uses
// it as the offset value for runtime.tls_g.
static void
inittls(void **tlsg, void **tlsbase)
{
pthread_key_t k;
int i, err;
void *handle, *get_ver, *off;
// Check for Android Q where we can use the free TLS_SLOT_APP slot.
handle = dlopen("libc.so", RTLD_LAZY);
if (handle == NULL) {
fatalf("inittls: failed to dlopen main program");
return;
}
// android_get_device_api_level was introduced in Android Q, so its mere presence
// is enough.
get_ver = dlsym(handle, "android_get_device_api_level");
dlclose(handle);
if (get_ver != NULL) {
off = (void *)(TLS_SLOT_APP*sizeof(void *));
// tlsg is initialized to Q's free TLS slot. Verify it while we're here.
if (*tlsg != off) {
fatalf("tlsg offset wrong, got %ld want %ld\n", *tlsg, off);
}
return;
}
err = pthread_key_create(&k, nil);
if(err != 0) {
fatalf("pthread_key_create failed: %d", err);
}
pthread_setspecific(k, (void*)magic1);
// If thread local slots are laid out as we expect, our magic word will
// be located at some low offset from tlsbase. However, just in case something went
// wrong, the search is limited to sensible offsets. PTHREAD_KEYS_MAX was the
// original limit, but issue 19472 made a higher limit necessary.
for (i=0; i<384; i++) {
if (*(tlsbase+i) == (void*)magic1) {
*tlsg = (void*)(i*sizeof(void *));
pthread_setspecific(k, 0);
return;
}
}
fatalf("inittls: could not find pthread key");
}
void (*x_cgo_inittls)(void **tlsg, void **tlsbase) = inittls;
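
The slot scan in inittls is easier to see in isolation. Below is a minimal, self-contained sketch of the same idea, using a stand-in slot array instead of the real bionic TLS base (find_slot, MAGIC, and NSLOTS are invented names for illustration only):

#include <stdio.h>
#include <string.h>

#define MAGIC 0x23581321345589ULL
#define NSLOTS 384

// find_slot mimics inittls: plant a magic value in one slot (the part
// pthread_setspecific does in the real code), then scan from the base
// to discover that slot's byte offset.
static long find_slot(void **base, int planted) {
	int i;
	base[planted] = (void*)MAGIC;
	for (i = 0; i < NSLOTS; i++) {
		if (base[i] == (void*)MAGIC)
			return (long)(i * sizeof(void*)); // offset usable like runtime.tls_g
	}
	return -1;
}

int main(void) {
	void *slots[NSLOTS];
	memset(slots, 0, sizeof slots);
	printf("offset = %ld\n", find_slot(slots, 7)); // prints 56 on LP64
	return 0;
}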

31
src/runtime/cgo/gcc_arm.S Normal file

@@ -0,0 +1,31 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
.file "gcc_arm.S"
/*
* void crosscall1(void (*fn)(void), void (*setg_gcc)(void *g), void *g)
*
* Calling into the gc tool chain, where all registers are caller save.
* Called from standard ARM EABI, where r4-r11 are callee-save, so they
* must be saved explicitly.
*/
.globl crosscall1
crosscall1:
push {r4, r5, r6, r7, r8, r9, r10, r11, ip, lr}
mov r4, r0
mov r5, r1
mov r0, r2
// Because the assembler might target an earlier revision of the ISA
// by default, we encode BLX as a .word.
.word 0xe12fff35 // blx r5 // setg(g)
.word 0xe12fff34 // blx r4 // fn()
pop {r4, r5, r6, r7, r8, r9, r10, r11, ip, pc}
#ifdef __ELF__
.section .note.GNU-stack,"",%progbits
#endif
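
For reference, those hand-assembled words decode cleanly: the ARM encoding of BLX (register) is 0xe12fff30 | Rm, so 0xe12fff35 is blx r5 and 0xe12fff34 is blx r4, matching the inline comments.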


@@ -0,0 +1,84 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
.file "gcc_arm64.S"
/*
* Apple still insists on underscore prefixes for C function names.
*/
#if defined(__APPLE__)
#define EXT(s) _##s
#else
#define EXT(s) s
#endif
// Apple's ld64 wants 4-byte alignment for ARM code sections.
// .align in both Apple as and GNU as treat n as aligning to 2**n bytes.
.align 2
/*
* void crosscall1(void (*fn)(void), void (*setg_gcc)(void *g), void *g)
*
* Calling into the gc tool chain, where all registers are caller save.
* Called from the standard AAPCS64 ABI, where x19-x29 are callee-save, so
* they must be saved explicitly, along with x30 (LR).
*/
.globl EXT(crosscall1)
EXT(crosscall1):
.cfi_startproc
stp x29, x30, [sp, #-96]!
.cfi_def_cfa_offset 96
.cfi_offset 29, -96
.cfi_offset 30, -88
mov x29, sp
.cfi_def_cfa_register 29
stp x19, x20, [sp, #80]
.cfi_offset 19, -16
.cfi_offset 20, -8
stp x21, x22, [sp, #64]
.cfi_offset 21, -32
.cfi_offset 22, -24
stp x23, x24, [sp, #48]
.cfi_offset 23, -48
.cfi_offset 24, -40
stp x25, x26, [sp, #32]
.cfi_offset 25, -64
.cfi_offset 26, -56
stp x27, x28, [sp, #16]
.cfi_offset 27, -80
.cfi_offset 28, -72
mov x19, x0
mov x20, x1
mov x0, x2
blr x20
blr x19
ldp x27, x28, [sp, #16]
.cfi_restore 27
.cfi_restore 28
ldp x25, x26, [sp, #32]
.cfi_restore 25
.cfi_restore 26
ldp x23, x24, [sp, #48]
.cfi_restore 23
.cfi_restore 24
ldp x21, x22, [sp, #64]
.cfi_restore 21
.cfi_restore 22
ldp x19, x20, [sp, #80]
.cfi_restore 19
.cfi_restore 20
ldp x29, x30, [sp], #96
.cfi_restore 29
.cfi_restore 30
.cfi_def_cfa 31, 0
ret
.cfi_endproc
#ifdef __ELF__
.section .note.GNU-stack,"",%progbits
#endif


@@ -0,0 +1,20 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build unix || windows
#include "libcgo.h"
// Releases the cgo traceback context.
void _cgo_release_context(uintptr_t ctxt) {
void (*pfn)(struct context_arg*);
pfn = _cgo_get_context_function();
if (ctxt != 0 && pfn != nil) {
struct context_arg arg;
arg.Context = ctxt;
(*pfn)(&arg);
}
}
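
_cgo_release_context is only half of the protocol; the other half is the context function registered via runtime.SetCgoTraceback. A hedged sketch of what such a function can look like on the C side (the token scheme and names below are invented for illustration, and context_arg is redeclared locally to keep the sketch self-contained; the real contract is described in the runtime.SetCgoTraceback documentation):

#include <stdint.h>

struct context_arg {
	uintptr_t Context;
};

static uintptr_t next_token = 1; // illustration only; a real one must be thread-safe

// When arg->Context is 0 the runtime is acquiring a context; when it is
// non-zero the runtime is releasing one, which is exactly the call that
// _cgo_release_context forwards.
static void example_context(struct context_arg *arg) {
	if (arg->Context == 0) {
		arg->Context = next_token++; // acquire: hand out a fresh token
	} else {
		/* release: drop whatever state arg->Context identifies */
	}
}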


@@ -0,0 +1,60 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include <string.h> /* for strerror */
#include <pthread.h>
#include <signal.h>
#include "libcgo.h"
#include "libcgo_unix.h"
static void* threadentry(void*);
static void (*setg_gcc)(void*);
void
x_cgo_init(G *g, void (*setg)(void*), void **tlsg, void **tlsbase)
{
setg_gcc = setg;
_cgo_set_stacklo(g, NULL);
}
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
size_t size;
int err;
sigfillset(&ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
size = pthread_get_stacksize_np(pthread_self());
pthread_attr_init(&attr);
pthread_attr_setstacksize(&attr, size);
// Leave stacklo=0 and set stackhi=size; mstart will do the rest.
ts->g->stackhi = size;
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fprintf(stderr, "runtime/cgo: pthread_create failed: %s\n", strerror(err));
abort();
}
}
extern void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
ts = *(ThreadStart*)v;
free(v);
crosscall1(ts.fn, setg_gcc, (void*)ts.g);
return nil;
}
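
The sigfillset/pthread_sigmask bracket around pthread_create recurs in every port in this file set: the new thread inherits the creator's fully blocked mask, so it cannot take a signal before the Go runtime installs its own handlers and signal stack. A minimal standalone demonstration of that inheritance (worker and main are illustration only):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static void* worker(void *arg) {
	sigset_t cur;
	// Query the mask this thread started with; it is inherited
	// from the creator at pthread_create time.
	pthread_sigmask(SIG_SETMASK, NULL, &cur);
	printf("SIGINT blocked at thread start: %d\n", sigismember(&cur, SIGINT));
	return NULL;
}

int main(void) {
	pthread_t p;
	sigset_t ign, oset;
	sigfillset(&ign);
	pthread_sigmask(SIG_SETMASK, &ign, &oset); // block everything
	pthread_create(&p, NULL, worker, NULL);    // child starts fully blocked
	pthread_sigmask(SIG_SETMASK, &oset, NULL); // restore our own mask
	pthread_join(p, NULL);
	return 0;
}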


@@ -0,0 +1,139 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include <limits.h>
#include <pthread.h>
#include <signal.h>
#include <string.h> /* for strerror */
#include <sys/param.h>
#include <unistd.h>
#include <stdlib.h>
#include "libcgo.h"
#include "libcgo_unix.h"
#include <TargetConditionals.h>
#if TARGET_OS_IPHONE
#include <CoreFoundation/CFBundle.h>
#include <CoreFoundation/CFString.h>
#endif
static void *threadentry(void*);
static void (*setg_gcc)(void*);
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
size_t size;
int err;
//fprintf(stderr, "runtime/cgo: _cgo_sys_thread_start: fn=%p, g=%p\n", ts->fn, ts->g); // debug
sigfillset(&ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
size = pthread_get_stacksize_np(pthread_self());
pthread_attr_init(&attr);
pthread_attr_setstacksize(&attr, size);
// Leave stacklo=0 and set stackhi=size; mstart will do the rest.
ts->g->stackhi = size;
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fprintf(stderr, "runtime/cgo: pthread_create failed: %s\n", strerror(err));
abort();
}
}
extern void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
ts = *(ThreadStart*)v;
free(v);
#if TARGET_OS_IPHONE
darwin_arm_init_thread_exception_port();
#endif
crosscall1(ts.fn, setg_gcc, (void*)ts.g);
return nil;
}
#if TARGET_OS_IPHONE
// init_working_dir sets the current working directory to the app root.
// By default ios/arm64 processes start in "/".
static void
init_working_dir()
{
CFBundleRef bundle = CFBundleGetMainBundle();
if (bundle == NULL) {
fprintf(stderr, "runtime/cgo: no main bundle\n");
return;
}
CFURLRef url_ref = CFBundleCopyResourceURL(bundle, CFSTR("Info"), CFSTR("plist"), NULL);
if (url_ref == NULL) {
// No Info.plist found. It can happen on Corellium virtual devices.
return;
}
CFStringRef url_str_ref = CFURLGetString(url_ref);
char buf[MAXPATHLEN];
Boolean res = CFStringGetCString(url_str_ref, buf, sizeof(buf), kCFStringEncodingUTF8);
CFRelease(url_ref);
if (!res) {
fprintf(stderr, "runtime/cgo: cannot get URL string\n");
return;
}
// url is of the form "file:///path/to/Info.plist".
// strip it down to the working directory "/path/to".
int url_len = strlen(buf);
if (url_len < sizeof("file://")+sizeof("/Info.plist")) {
fprintf(stderr, "runtime/cgo: bad URL: %s\n", buf);
return;
}
buf[url_len-sizeof("/Info.plist")+1] = 0;
char *dir = &buf[0] + sizeof("file://")-1;
if (chdir(dir) != 0) {
fprintf(stderr, "runtime/cgo: chdir(%s) failed\n", dir);
}
// The test harness in go_ios_exec passes the relative working directory
// in the GoExecWrapperWorkingDirectory property of the app bundle.
CFStringRef wd_ref = CFBundleGetValueForInfoDictionaryKey(bundle, CFSTR("GoExecWrapperWorkingDirectory"));
if (wd_ref != NULL) {
if (!CFStringGetCString(wd_ref, buf, sizeof(buf), kCFStringEncodingUTF8)) {
fprintf(stderr, "runtime/cgo: cannot get GoExecWrapperWorkingDirectory string\n");
return;
}
if (chdir(buf) != 0) {
fprintf(stderr, "runtime/cgo: chdir(%s) failed\n", buf);
}
}
}
#endif // TARGET_OS_IPHONE
void
x_cgo_init(G *g, void (*setg)(void*))
{
//fprintf(stderr, "x_cgo_init = %p\n", &x_cgo_init); // aid debugging in presence of ASLR
setg_gcc = setg;
_cgo_set_stacklo(g, NULL);
#if TARGET_OS_IPHONE
darwin_arm_init_mach_exception_handler();
darwin_arm_init_thread_exception_port();
init_working_dir();
#endif
}


@@ -0,0 +1,60 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include <sys/types.h>
#include <sys/signalvar.h>
#include <pthread.h>
#include <signal.h>
#include <string.h>
#include "libcgo.h"
#include "libcgo_unix.h"
static void* threadentry(void*);
static void (*setg_gcc)(void*);
void
x_cgo_init(G *g, void (*setg)(void*))
{
setg_gcc = setg;
_cgo_set_stacklo(g, NULL);
}
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
size_t size;
int err;
SIGFILLSET(ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &size);
// Leave stacklo=0 and set stackhi=size; mstart will do the rest.
ts->g->stackhi = size;
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fatalf("pthread_create failed: %s", strerror(err));
}
}
extern void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
ts = *(ThreadStart*)v;
free(v);
crosscall1(ts.fn, setg_gcc, (void*)ts.g);
return nil;
}


@@ -0,0 +1,23 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build aix || (!android && linux) || dragonfly || freebsd || netbsd || openbsd || solaris
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include "libcgo.h"
void
fatalf(const char* format, ...)
{
va_list ap;
fprintf(stderr, "runtime/cgo: ");
va_start(ap, format);
vfprintf(stderr, format, ap);
va_end(ap);
fprintf(stderr, "\n");
abort();
}


@@ -0,0 +1,71 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build freebsd && (386 || arm || arm64 || riscv64)
#include <sys/types.h>
#include <sys/signalvar.h>
#include <machine/sysarch.h>
#include <pthread.h>
#include <signal.h>
#include <string.h>
#include "libcgo.h"
#include "libcgo_unix.h"
#ifdef ARM_TP_ADDRESS
// ARM_TP_ADDRESS is (ARM_VECTORS_HIGH + 0x1000) or 0xffff1000
// and is known to runtime.read_tls_fallback. Verify it with
// cpp.
#if ARM_TP_ADDRESS != 0xffff1000
#error Wrong ARM_TP_ADDRESS!
#endif
#endif
static void* threadentry(void*);
static void (*setg_gcc)(void*);
void
x_cgo_init(G *g, void (*setg)(void*))
{
setg_gcc = setg;
_cgo_set_stacklo(g, NULL);
}
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
size_t size;
int err;
SIGFILLSET(ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &size);
// Leave stacklo=0 and set stackhi=size; mstart will do the rest.
ts->g->stackhi = size;
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fatalf("pthread_create failed: %s", strerror(err));
}
}
extern void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
ts = *(ThreadStart*)v;
free(v);
crosscall1(ts.fn, setg_gcc, ts.g);
return nil;
}


@@ -0,0 +1,71 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include <sys/types.h>
#include <errno.h>
#include <sys/signalvar.h>
#include <pthread.h>
#include <signal.h>
#include <string.h>
#include "libcgo.h"
#include "libcgo_unix.h"
static void* threadentry(void*);
static void (*setg_gcc)(void*);
void
x_cgo_init(G *g, void (*setg)(void*))
{
uintptr *pbounds;
// Deal with memory sanitizer/clang interaction.
// See gcc_linux_amd64.c for details.
setg_gcc = setg;
pbounds = (uintptr*)malloc(2 * sizeof(uintptr));
if (pbounds == NULL) {
fatalf("malloc failed: %s", strerror(errno));
}
_cgo_set_stacklo(g, pbounds);
free(pbounds);
}
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
size_t size;
int err;
SIGFILLSET(ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &size);
// Leave stacklo=0 and set stackhi=size; mstart will do the rest.
ts->g->stackhi = size;
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fatalf("pthread_create failed: %s", strerror(err));
}
}
extern void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
ts = *(ThreadStart*)v;
_cgo_tsan_acquire();
free(v);
_cgo_tsan_release();
crosscall1(ts.fn, setg_gcc, (void*)ts.g);
return nil;
}


@@ -0,0 +1,80 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build freebsd && amd64
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <signal.h>
#include "libcgo.h"
// go_sigaction_t is a C version of the sigactiont struct from
// os_freebsd.go. This definition (and its conversion to and from
// struct sigaction) is specific to freebsd/amd64.
typedef struct {
uint32_t __bits[_SIG_WORDS];
} go_sigset_t;
typedef struct {
uintptr_t handler;
int32_t flags;
go_sigset_t mask;
} go_sigaction_t;
int32_t
x_cgo_sigaction(intptr_t signum, const go_sigaction_t *goact, go_sigaction_t *oldgoact) {
int32_t ret;
struct sigaction act;
struct sigaction oldact;
size_t i;
_cgo_tsan_acquire();
memset(&act, 0, sizeof act);
memset(&oldact, 0, sizeof oldact);
if (goact) {
if (goact->flags & SA_SIGINFO) {
act.sa_sigaction = (void(*)(int, siginfo_t*, void*))(goact->handler);
} else {
act.sa_handler = (void(*)(int))(goact->handler);
}
sigemptyset(&act.sa_mask);
for (i = 0; i < 8 * sizeof(goact->mask); i++) {
if (goact->mask.__bits[i/32] & ((uint32_t)(1)<<(i&31))) {
sigaddset(&act.sa_mask, i+1);
}
}
act.sa_flags = goact->flags;
}
ret = sigaction(signum, goact ? &act : NULL, oldgoact ? &oldact : NULL);
if (ret == -1) {
// runtime.sigaction expects _cgo_sigaction to return errno on error.
_cgo_tsan_release();
return errno;
}
if (oldgoact) {
if (oldact.sa_flags & SA_SIGINFO) {
oldgoact->handler = (uintptr_t)(oldact.sa_sigaction);
} else {
oldgoact->handler = (uintptr_t)(oldact.sa_handler);
}
for (i = 0 ; i < _SIG_WORDS; i++) {
oldgoact->mask.__bits[i] = 0;
}
for (i = 0; i < 8 * sizeof(oldgoact->mask); i++) {
if (sigismember(&oldact.sa_mask, i+1) == 1) {
oldgoact->mask.__bits[i/32] |= (uint32_t)(1)<<(i&31);
}
}
oldgoact->flags = oldact.sa_flags;
}
_cgo_tsan_release();
return ret;
}


@@ -0,0 +1,178 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build unix
// When cross-compiling with clang to linux/armv5, atomics are emulated
// and cause a compiler warning. This results in a build failure since
// cgo uses -Werror. See #65290.
#pragma GCC diagnostic ignored "-Wpragmas"
#pragma GCC diagnostic ignored "-Wunknown-warning-option"
#pragma GCC diagnostic ignored "-Watomic-alignment"
#include <pthread.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h> // strerror
#include <time.h>
#include "libcgo.h"
#include "libcgo_unix.h"
static pthread_cond_t runtime_init_cond = PTHREAD_COND_INITIALIZER;
static pthread_mutex_t runtime_init_mu = PTHREAD_MUTEX_INITIALIZER;
static int runtime_init_done;
// pthread_g is a pthread-specific key, used to store the g that is bound
// to the C thread. The registered pthread_key_destructor will dropm when
// a C thread exits with a non-NULL thread-specific g value.
static pthread_key_t pthread_g;
static void pthread_key_destructor(void* g);
uintptr_t x_cgo_pthread_key_created;
void (*x_crosscall2_ptr)(void (*fn)(void *), void *, int, size_t);
// The context function, used when tracing back C calls into Go.
static void (*cgo_context_function)(struct context_arg*);
void
x_cgo_sys_thread_create(void* (*func)(void*), void* arg) {
pthread_t p;
int err = _cgo_try_pthread_create(&p, NULL, func, arg);
if (err != 0) {
fprintf(stderr, "pthread_create failed: %s", strerror(err));
abort();
}
}
uintptr_t
_cgo_wait_runtime_init_done(void) {
void (*pfn)(struct context_arg*);
pfn = __atomic_load_n(&cgo_context_function, __ATOMIC_CONSUME);
int done = 2;
if (__atomic_load_n(&runtime_init_done, __ATOMIC_CONSUME) != done) {
pthread_mutex_lock(&runtime_init_mu);
while (__atomic_load_n(&runtime_init_done, __ATOMIC_CONSUME) == 0) {
pthread_cond_wait(&runtime_init_cond, &runtime_init_mu);
}
// The key and x_cgo_pthread_key_created are for the whole program,
// whereas the thread-specific value and the destructor are per thread.
if (x_cgo_pthread_key_created == 0 && pthread_key_create(&pthread_g, pthread_key_destructor) == 0) {
x_cgo_pthread_key_created = 1;
}
// TODO(iant): For the case of a new C thread calling into Go, such
// as when using -buildmode=c-archive, we know that Go runtime
// initialization is complete but we do not know that all Go init
// functions have been run. We should not fetch cgo_context_function
// until they have been, because that is where a call to
// SetCgoTraceback is likely to occur. We are going to wait for Go
// initialization to be complete anyhow, later, by waiting for
// main_init_done to be closed in cgocallbackg1. We should wait here
// instead. See also issue #15943.
pfn = __atomic_load_n(&cgo_context_function, __ATOMIC_CONSUME);
__atomic_store_n(&runtime_init_done, done, __ATOMIC_RELEASE);
pthread_mutex_unlock(&runtime_init_mu);
}
if (pfn != nil) {
struct context_arg arg;
arg.Context = 0;
(*pfn)(&arg);
return arg.Context;
}
return 0;
}
// _cgo_set_stacklo sets g->stacklo based on the stack size.
// This is common code called from x_cgo_init, which is itself
// called by rt0_go in the runtime package.
void _cgo_set_stacklo(G *g, uintptr *pbounds)
{
uintptr bounds[2];
// pbounds can be passed in by the caller; see gcc_linux_amd64.c.
if (pbounds == NULL) {
pbounds = &bounds[0];
}
x_cgo_getstackbound(pbounds);
g->stacklo = *pbounds;
// Sanity check the results now, rather than hitting a
// "morestack on g0" crash later.
if (g->stacklo >= g->stackhi) {
fprintf(stderr, "runtime/cgo: bad stack bounds: lo=%p hi=%p\n", (void*)(g->stacklo), (void*)(g->stackhi));
abort();
}
}
// Store the g in the thread-specific value associated with the pthread
// key pthread_g; pthread_key_destructor will then dropm when the thread exits.
void x_cgo_bindm(void* g) {
// We assume this will always succeed; otherwise extra Ms might leak
// when a C thread exits after a cgo call.
// We only invoke this function once per thread in runtime.needAndBindM;
// later calls just reuse the bound m.
pthread_setspecific(pthread_g, g);
}
void
x_cgo_notify_runtime_init_done(void* dummy __attribute__ ((unused))) {
pthread_mutex_lock(&runtime_init_mu);
__atomic_store_n(&runtime_init_done, 1, __ATOMIC_RELEASE);
pthread_cond_broadcast(&runtime_init_cond);
pthread_mutex_unlock(&runtime_init_mu);
}
// Sets the context function to call to record the traceback context
// when calling a Go function from C code. Called from runtime.SetCgoTraceback.
void x_cgo_set_context_function(void (*context)(struct context_arg*)) {
__atomic_store_n(&cgo_context_function, context, __ATOMIC_RELEASE);
}
// Gets the context function.
void (*(_cgo_get_context_function(void)))(struct context_arg*) {
return __atomic_load_n(&cgo_context_function, __ATOMIC_CONSUME);
}
// _cgo_try_pthread_create retries pthread_create if it fails with
// EAGAIN.
int
_cgo_try_pthread_create(pthread_t* thread, const pthread_attr_t* attr, void* (*pfn)(void*), void* arg) {
int tries;
int err;
struct timespec ts;
for (tries = 0; tries < 20; tries++) {
err = pthread_create(thread, attr, pfn, arg);
if (err == 0) {
pthread_detach(*thread);
return 0;
}
if (err != EAGAIN) {
return err;
}
ts.tv_sec = 0;
ts.tv_nsec = (tries + 1) * 1000 * 1000; // Milliseconds.
nanosleep(&ts, nil);
}
return EAGAIN;
}
static void
pthread_key_destructor(void* g) {
if (x_crosscall2_ptr != NULL) {
// fn == NULL means dropm.
// We restore g from the stored value before dropm runs in
// runtime.cgocallback, since on some platforms the g stored in
// TLS by Go may already be cleared by the time this destructor
// is invoked.
x_crosscall2_ptr(NULL, g, 0, 0);
}
}
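
The pthread_g key above is what lets the runtime drop an M when a C-created thread exits, and the underlying mechanism is plain POSIX: a destructor registered with pthread_key_create runs at thread exit for every key whose thread-specific value is non-NULL. A standalone sketch of just that mechanism, with the dropm step stubbed out as a printf (all names here are for illustration):

#include <pthread.h>
#include <stdio.h>

static pthread_key_t key;

// Fires automatically at thread exit because the slot is non-NULL,
// mirroring how pthread_key_destructor gets its chance to dropm.
static void destructor(void *g) {
	printf("thread exiting, would dropm(%p)\n", g);
}

static void* cthread(void *arg) {
	pthread_setspecific(key, arg); // the x_cgo_bindm step
	return NULL;                   // destructor runs after this returns
}

int main(void) {
	pthread_t p;
	int fake_g = 0;
	pthread_key_create(&key, destructor);
	pthread_create(&p, NULL, cthread, &fake_g);
	pthread_join(p, NULL);
	return 0;
}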


@@ -0,0 +1,158 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <process.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include "libcgo.h"
#include "libcgo_windows.h"
// Ensure there's one symbol marked __declspec(dllexport).
// If there are no exported symbols, the unfortunate behavior of
// the binutils linker is to also strip the relocations table,
// resulting in a non-PIE binary. The other option is the
// --export-all-symbols flag, but we don't need to export all symbols
// and doing so may overflow the export table (#40795).
// See https://sourceware.org/bugzilla/show_bug.cgi?id=19011
__declspec(dllexport) int _cgo_dummy_export;
static volatile LONG runtime_init_once_gate = 0;
static volatile LONG runtime_init_once_done = 0;
static CRITICAL_SECTION runtime_init_cs;
static HANDLE runtime_init_wait;
static int runtime_init_done;
uintptr_t x_cgo_pthread_key_created;
void (*x_crosscall2_ptr)(void (*fn)(void *), void *, int, size_t);
// Pre-initialize the runtime synchronization objects
void
_cgo_preinit_init() {
runtime_init_wait = CreateEvent(NULL, TRUE, FALSE, NULL);
if (runtime_init_wait == NULL) {
fprintf(stderr, "runtime: failed to create runtime initialization wait event.\n");
abort();
}
InitializeCriticalSection(&runtime_init_cs);
}
// Make sure that the preinit sequence has run.
void
_cgo_maybe_run_preinit() {
if (!InterlockedExchangeAdd(&runtime_init_once_done, 0)) {
if (InterlockedIncrement(&runtime_init_once_gate) == 1) {
_cgo_preinit_init();
InterlockedIncrement(&runtime_init_once_done);
} else {
// Decrement to avoid overflow.
InterlockedDecrement(&runtime_init_once_gate);
while(!InterlockedExchangeAdd(&runtime_init_once_done, 0)) {
Sleep(0);
}
}
}
}
void
x_cgo_sys_thread_create(void (*func)(void*), void* arg) {
_cgo_beginthread(func, arg);
}
int
_cgo_is_runtime_initialized() {
EnterCriticalSection(&runtime_init_cs);
int status = runtime_init_done;
LeaveCriticalSection(&runtime_init_cs);
return status;
}
uintptr_t
_cgo_wait_runtime_init_done(void) {
void (*pfn)(struct context_arg*);
_cgo_maybe_run_preinit();
while (!_cgo_is_runtime_initialized()) {
WaitForSingleObject(runtime_init_wait, INFINITE);
}
pfn = _cgo_get_context_function();
if (pfn != nil) {
struct context_arg arg;
arg.Context = 0;
(*pfn)(&arg);
return arg.Context;
}
return 0;
}
// Should not be used since x_cgo_pthread_key_created will always be zero.
void x_cgo_bindm(void* dummy) {
fprintf(stderr, "unexpected cgo_bindm on Windows\n");
abort();
}
void
x_cgo_notify_runtime_init_done(void* dummy) {
_cgo_maybe_run_preinit();
EnterCriticalSection(&runtime_init_cs);
runtime_init_done = 1;
LeaveCriticalSection(&runtime_init_cs);
if (!SetEvent(runtime_init_wait)) {
fprintf(stderr, "runtime: failed to signal runtime initialization complete.\n");
abort();
}
}
// The context function, used when tracing back C calls into Go.
static void (*cgo_context_function)(struct context_arg*);
// Sets the context function to call to record the traceback context
// when calling a Go function from C code. Called from runtime.SetCgoTraceback.
void x_cgo_set_context_function(void (*context)(struct context_arg*)) {
EnterCriticalSection(&runtime_init_cs);
cgo_context_function = context;
LeaveCriticalSection(&runtime_init_cs);
}
// Gets the context function.
void (*(_cgo_get_context_function(void)))(struct context_arg*) {
void (*ret)(struct context_arg*);
EnterCriticalSection(&runtime_init_cs);
ret = cgo_context_function;
LeaveCriticalSection(&runtime_init_cs);
return ret;
}
void _cgo_beginthread(void (*func)(void*), void* arg) {
int tries;
uintptr_t thandle;
for (tries = 0; tries < 20; tries++) {
thandle = _beginthread(func, 0, arg);
if (thandle == -1 && errno == EACCES) {
// "Insufficient resources", try again in a bit.
//
// Note that the first Sleep(0) is a yield.
Sleep(tries); // milliseconds
continue;
} else if (thandle == -1) {
break;
}
return; // Success!
}
fprintf(stderr, "runtime: failed to create new OS thread (%d)\n", errno);
abort();
}
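
_cgo_maybe_run_preinit is a hand-rolled call_once: the gate counter elects exactly one initializer, and the done counter publishes completion to everyone else. The same shape in miniature (Windows-only; run_once and say_hello are illustration names):

#include <windows.h>
#include <stdio.h>

static volatile LONG gate = 0, done = 0;

static void run_once(void (*init)(void)) {
	if (InterlockedExchangeAdd(&done, 0))
		return;                       // fast path: already initialized
	if (InterlockedIncrement(&gate) == 1) {
		init();                       // we won the election
		InterlockedIncrement(&done);  // publish completion
	} else {
		InterlockedDecrement(&gate);  // lost; decrement to avoid overflow
		while (!InterlockedExchangeAdd(&done, 0))
			Sleep(0);                 // yield until the winner finishes
	}
}

static void say_hello(void) { puts("initialized exactly once"); }

int main(void) {
	run_once(say_hello);
	run_once(say_hello); // no-op
	return 0;
}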


@@ -0,0 +1,66 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux && (386 || arm || loong64 || mips || mipsle || mips64 || mips64le || riscv64)
#include <pthread.h>
#include <string.h>
#include <signal.h>
#include "libcgo.h"
#include "libcgo_unix.h"
static void *threadentry(void*);
void (*x_cgo_inittls)(void **tlsg, void **tlsbase) __attribute__((common));
static void (*setg_gcc)(void*);
void
x_cgo_init(G *g, void (*setg)(void*), void **tlsg, void **tlsbase)
{
setg_gcc = setg;
_cgo_set_stacklo(g, NULL);
if (x_cgo_inittls) {
x_cgo_inittls(tlsg, tlsbase);
}
}
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
size_t size;
int err;
sigfillset(&ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &size);
// Leave stacklo=0 and set stackhi=size; mstart will do the rest.
ts->g->stackhi = size;
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fatalf("pthread_create failed: %s", strerror(err));
}
}
extern void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
ts = *(ThreadStart*)v;
free(v);
crosscall1(ts.fn, setg_gcc, ts.g);
return nil;
}


@@ -0,0 +1,91 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include <pthread.h>
#include <errno.h>
#include <string.h> // strerror
#include <signal.h>
#include <stdlib.h>
#include "libcgo.h"
#include "libcgo_unix.h"
static void* threadentry(void*);
static void (*setg_gcc)(void*);
// This will be set in gcc_android.c for android-specific customization.
void (*x_cgo_inittls)(void **tlsg, void **tlsbase) __attribute__((common));
void
x_cgo_init(G *g, void (*setg)(void*), void **tlsg, void **tlsbase)
{
uintptr *pbounds;
/* The memory sanitizer distributed with versions of clang
before 3.8 has a bug: if you call mmap before malloc, mmap
may return an address that is later overwritten by the msan
library. Avoid this problem by forcing a call to malloc
here, before we ever call mmap.
This is only required for the memory sanitizer, so it's
unfortunate that we always run it. It should be possible
to remove this when we no longer care about versions of
clang before 3.8. The test for this is
misc/cgo/testsanitizers.
GCC works hard to eliminate a seemingly unnecessary call to
malloc, so we actually use the memory we allocate. */
setg_gcc = setg;
pbounds = (uintptr*)malloc(2 * sizeof(uintptr));
if (pbounds == NULL) {
fatalf("malloc failed: %s", strerror(errno));
}
_cgo_set_stacklo(g, pbounds);
free(pbounds);
if (x_cgo_inittls) {
x_cgo_inittls(tlsg, tlsbase);
}
}
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
size_t size;
int err;
sigfillset(&ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &size);
// Leave stacklo=0 and set stackhi=size; mstart will do the rest.
ts->g->stackhi = size;
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fatalf("pthread_create failed: %s", strerror(err));
}
}
extern void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
ts = *(ThreadStart*)v;
_cgo_tsan_acquire();
free(v);
_cgo_tsan_release();
crosscall1(ts.fn, setg_gcc, (void*)ts.g);
return nil;
}


@@ -0,0 +1,87 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include <pthread.h>
#include <errno.h>
#include <string.h>
#include <signal.h>
#include <stdlib.h>
#include "libcgo.h"
#include "libcgo_unix.h"
static void *threadentry(void*);
void (*x_cgo_inittls)(void **tlsg, void **tlsbase) __attribute__((common));
static void (*setg_gcc)(void*);
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
size_t size;
int err;
sigfillset(&ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &size);
// Leave stacklo=0 and set stackhi=size; mstart will do the rest.
ts->g->stackhi = size;
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fatalf("pthread_create failed: %s", strerror(err));
}
}
extern void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
ts = *(ThreadStart*)v;
free(v);
crosscall1(ts.fn, setg_gcc, (void*)ts.g);
return nil;
}
void
x_cgo_init(G *g, void (*setg)(void*), void **tlsg, void **tlsbase)
{
uintptr *pbounds;
/* The memory sanitizer distributed with versions of clang
before 3.8 has a bug: if you call mmap before malloc, mmap
may return an address that is later overwritten by the msan
library. Avoid this problem by forcing a call to malloc
here, before we ever call mmap.
This is only required for the memory sanitizer, so it's
unfortunate that we always run it. It should be possible
to remove this when we no longer care about versions of
clang before 3.8. The test for this is
misc/cgo/testsanitizers.
GCC works hard to eliminate a seemingly unnecessary call to
malloc, so we actually use the memory we allocate. */
setg_gcc = setg;
pbounds = (uintptr*)malloc(2 * sizeof(uintptr));
if (pbounds == NULL) {
fatalf("malloc failed: %s", strerror(errno));
}
_cgo_set_stacklo(g, pbounds);
free(pbounds);
if (x_cgo_inittls) {
x_cgo_inittls(tlsg, tlsbase);
}
}


@@ -0,0 +1,86 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux && (ppc64 || ppc64le)
.file "gcc_linux_ppc64x.S"
// Define a frame which has no argument space, but is compatible with
// a call into a Go ABI. We allocate 32B to match FIXED_FRAME with
// similar semantics, except we store the backchain pointer, not the
// LR at offset 0. R2 is stored in the Go TOC save slot (offset 24).
.set GPR_OFFSET, 32
.set FPR_OFFSET, GPR_OFFSET + 18*8
.set VR_OFFSET, FPR_OFFSET + 18*8
.set FRAME_SIZE, VR_OFFSET + 12*16
.macro FOR_EACH_GPR opcode r=14
.ifge 31 - \r
\opcode \r, GPR_OFFSET + 8*(\r-14)(1)
FOR_EACH_GPR \opcode "(\r+1)"
.endif
.endm
.macro FOR_EACH_FPR opcode fr=14
.ifge 31 - \fr
\opcode \fr, FPR_OFFSET + 8*(\fr-14)(1)
FOR_EACH_FPR \opcode "(\fr+1)"
.endif
.endm
.macro FOR_EACH_VR opcode vr=20
.ifge 31 - \vr
li 0, VR_OFFSET + 16*(\vr-20)
\opcode \vr, 1, 0
FOR_EACH_VR \opcode "(\vr+1)"
.endif
.endm
/*
* void crosscall_ppc64(void (*fn)(void), void *g)
*
* Calling into the gc tool chain, where all registers are caller save.
* Called from standard ppc64 C ABI, where r2, r14-r31, f14-f31 are
* callee-save, so they must be saved explicitly.
*/
.globl crosscall_ppc64
crosscall_ppc64:
// Start with standard C stack frame layout and linkage
mflr %r0
std %r0, 16(%r1) // Save LR in caller's frame
mfcr %r0
std %r0, 8(%r1) // Save CR in caller's frame
stdu %r1, -FRAME_SIZE(%r1)
std %r2, 24(%r1)
FOR_EACH_GPR std
FOR_EACH_FPR stfd
FOR_EACH_VR stvx
// Set up Go ABI constant registers
li %r0, 0
// Restore g pointer (r30 in Go ABI, which may have been clobbered by C)
mr %r30, %r4
// Call fn
mr %r12, %r3
mtctr %r3
bctrl
FOR_EACH_GPR ld
FOR_EACH_FPR lfd
FOR_EACH_VR lvx
ld %r2, 24(%r1)
addi %r1, %r1, FRAME_SIZE
ld %r0, 16(%r1)
mtlr %r0
ld %r0, 8(%r1)
mtcr %r0
blr
#ifdef __ELF__
.section .note.GNU-stack,"",%progbits
#endif
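
The recursive .macro definitions above are just assembler-level loop unrolling. For instance, FOR_EACH_GPR std expands (starting from the default r=14, and after the assembler folds the offset arithmetic) to:

std 14, GPR_OFFSET + 0(1)
std 15, GPR_OFFSET + 8(1)
...
std 31, GPR_OFFSET + 136(1)

with the .ifge guard terminating the recursion once the register number exceeds 31.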


@@ -0,0 +1,63 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include <pthread.h>
#include <string.h>
#include <signal.h>
#include "libcgo.h"
#include "libcgo_unix.h"
static void *threadentry(void*);
void (*x_cgo_inittls)(void **tlsg, void **tlsbase);
static void (*setg_gcc)(void*);
void
x_cgo_init(G *g, void (*setg)(void*), void **tlsbase)
{
setg_gcc = setg;
_cgo_set_stacklo(g, NULL);
}
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
size_t size;
int err;
sigfillset(&ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &size);
// Leave stacklo=0 and set stackhi=size; mstart will do the rest.
ts->g->stackhi = size;
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fatalf("pthread_create failed: %s", strerror(err));
}
}
extern void crosscall_s390x(void (*fn)(void), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
ts = *(ThreadStart*)v;
free(v);
// Save g for this thread in C TLS
setg_gcc((void*)ts.g);
crosscall_s390x(ts.fn, (void*)ts.g);
return nil;
}


@@ -0,0 +1,67 @@
// Copyright 2022 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
.file "gcc_loong64.S"
/*
* void crosscall1(void (*fn)(void), void (*setg_gcc)(void *g), void *g)
*
* Calling into the gc tool chain, where all registers are caller save.
* Called from the standard lp64d ABI, where $r23-$r30 and $f24-$f31
* are callee-save, so they must be saved explicitly, along with $r1 (LR).
*/
.globl crosscall1
crosscall1:
addi.d $r3, $r3, -160
st.d $r1, $r3, 0
st.d $r23, $r3, 8
st.d $r24, $r3, 16
st.d $r25, $r3, 24
st.d $r26, $r3, 32
st.d $r27, $r3, 40
st.d $r28, $r3, 48
st.d $r29, $r3, 56
st.d $r30, $r3, 64
st.d $r2, $r3, 72
st.d $r22, $r3, 80
fst.d $f24, $r3, 88
fst.d $f25, $r3, 96
fst.d $f26, $r3, 104
fst.d $f27, $r3, 112
fst.d $f28, $r3, 120
fst.d $f29, $r3, 128
fst.d $f30, $r3, 136
fst.d $f31, $r3, 144
move $r18, $r4 // save R4
move $r4, $r6
jirl $r1, $r5, 0 // call setg_gcc (clobbers R4)
jirl $r1, $r18, 0 // call fn
ld.d $r23, $r3, 8
ld.d $r24, $r3, 16
ld.d $r25, $r3, 24
ld.d $r26, $r3, 32
ld.d $r27, $r3, 40
ld.d $r28, $r3, 48
ld.d $r29, $r3, 56
ld.d $r30, $r3, 64
ld.d $r2, $r3, 72
ld.d $r22, $r3, 80
fld.d $f24, $r3, 88
fld.d $f25, $r3, 96
fld.d $f26, $r3, 104
fld.d $f27, $r3, 112
fld.d $f28, $r3, 120
fld.d $f29, $r3, 128
fld.d $f30, $r3, 136
fld.d $f31, $r3, 144
ld.d $r1, $r3, 0
addi.d $r3, $r3, 160
jirl $r0, $r1, 0
#ifdef __ELF__
.section .note.GNU-stack,"",%progbits
#endif


@@ -0,0 +1,89 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build mips64 || mips64le
.file "gcc_mips64x.S"
/*
* void crosscall1(void (*fn)(void), void (*setg_gcc)(void *g), void *g)
*
* Calling into the gc tool chain, where all registers are caller save.
* Called from standard MIPS N64 ABI, where $16-$23, $28, $30, and $f24-$f31
* are callee-save, so they must be saved explicitly, along with $31 (LR).
*/
.globl crosscall1
.set noat
crosscall1:
#ifndef __mips_soft_float
daddiu $29, $29, -160
#else
daddiu $29, $29, -96 // For soft-float, no need to make room for FP registers
#endif
sd $31, 0($29)
sd $16, 8($29)
sd $17, 16($29)
sd $18, 24($29)
sd $19, 32($29)
sd $20, 40($29)
sd $21, 48($29)
sd $22, 56($29)
sd $23, 64($29)
sd $28, 72($29)
sd $30, 80($29)
#ifndef __mips_soft_float
sdc1 $f24, 88($29)
sdc1 $f25, 96($29)
sdc1 $f26, 104($29)
sdc1 $f27, 112($29)
sdc1 $f28, 120($29)
sdc1 $f29, 128($29)
sdc1 $f30, 136($29)
sdc1 $f31, 144($29)
#endif
// prepare SB register = pc & 0xffffffff00000000
bal 1f
1:
dsrl $28, $31, 32
dsll $28, $28, 32
move $20, $4 // save R4
move $4, $6
jalr $5 // call setg_gcc (clobbers R4)
jalr $20 // call fn
ld $16, 8($29)
ld $17, 16($29)
ld $18, 24($29)
ld $19, 32($29)
ld $20, 40($29)
ld $21, 48($29)
ld $22, 56($29)
ld $23, 64($29)
ld $28, 72($29)
ld $30, 80($29)
#ifndef __mips_soft_float
ldc1 $f24, 88($29)
ldc1 $f25, 96($29)
ldc1 $f26, 104($29)
ldc1 $f27, 112($29)
ldc1 $f28, 120($29)
ldc1 $f29, 128($29)
ldc1 $f30, 136($29)
ldc1 $f31, 144($29)
#endif
ld $31, 0($29)
#ifndef __mips_soft_float
daddiu $29, $29, 160
#else
daddiu $29, $29, 96
#endif
jr $31
.set at
#ifdef __ELF__
.section .note.GNU-stack,"",%progbits
#endif


@@ -0,0 +1,77 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build mips || mipsle
.file "gcc_mipsx.S"
/*
* void crosscall1(void (*fn)(void), void (*setg_gcc)(void *g), void *g)
*
* Calling into the gc tool chain, where all registers are caller save.
* Called from standard MIPS O32 ABI, where $16-$23, $30, and $f20-$f31
* are callee-save, so they must be saved explicitly, along with $31 (LR).
*/
.globl crosscall1
.set noat
crosscall1:
#ifndef __mips_soft_float
addiu $29, $29, -88
#else
addiu $29, $29, -40 // For soft-float, no need to make room for FP registers
#endif
sw $31, 0($29)
sw $16, 4($29)
sw $17, 8($29)
sw $18, 12($29)
sw $19, 16($29)
sw $20, 20($29)
sw $21, 24($29)
sw $22, 28($29)
sw $23, 32($29)
sw $30, 36($29)
#ifndef __mips_soft_float
sdc1 $f20, 40($29)
sdc1 $f22, 48($29)
sdc1 $f24, 56($29)
sdc1 $f26, 64($29)
sdc1 $f28, 72($29)
sdc1 $f30, 80($29)
#endif
move $20, $4 // save R4
move $4, $6
jalr $5 // call setg_gcc
jalr $20 // call fn
lw $16, 4($29)
lw $17, 8($29)
lw $18, 12($29)
lw $19, 16($29)
lw $20, 20($29)
lw $21, 24($29)
lw $22, 28($29)
lw $23, 32($29)
lw $30, 36($29)
#ifndef __mips_soft_float
ldc1 $f20, 40($29)
ldc1 $f22, 48($29)
ldc1 $f24, 56($29)
ldc1 $f26, 64($29)
ldc1 $f28, 72($29)
ldc1 $f30, 80($29)
#endif
lw $31, 0($29)
#ifndef __mips_soft_float
addiu $29, $29, 88
#else
addiu $29, $29, 40
#endif
jr $31
.set at
#ifdef __ELF__
.section .note.GNU-stack,"",%progbits
#endif


@@ -0,0 +1,39 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build (linux && (amd64 || arm64 || loong64 || ppc64le)) || (freebsd && amd64)
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/mman.h>
#include "libcgo.h"
uintptr_t
x_cgo_mmap(void *addr, uintptr_t length, int32_t prot, int32_t flags, int32_t fd, uint32_t offset) {
void *p;
_cgo_tsan_acquire();
p = mmap(addr, length, prot, flags, fd, offset);
_cgo_tsan_release();
if (p == MAP_FAILED) {
/* This is what the Go code expects on failure. */
return (uintptr_t)errno;
}
return (uintptr_t)p;
}
void
x_cgo_munmap(void *addr, uintptr_t length) {
int r;
_cgo_tsan_acquire();
r = munmap(addr, length);
_cgo_tsan_release();
if (r < 0) {
/* The Go runtime is not prepared for munmap to fail. */
abort();
}
}
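
x_cgo_mmap overloads its return value: a small integer is an errno, anything else is the mapped address (a real mapping can never land in the first page). A hedged sketch of the corresponding caller-side decoding; the Go runtime performs the equivalent test on its side, and the 4096 cutoff here mirrors the convention that errno values stay below one page (decode_mmap is a hypothetical helper):

#include <stddef.h>
#include <stdint.h>

// decode_mmap shows how a caller separates the two cases packed
// into x_cgo_mmap's single return value.
static void *decode_mmap(uintptr_t ret, int *err) {
	if (ret < 4096) {    // an errno smuggled through the pointer
		*err = (int)ret;
		return NULL;
	}
	*err = 0;
	return (void *)ret;  // a genuine mapping address
}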


@@ -0,0 +1,73 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build netbsd && (386 || amd64 || arm || arm64)
#include <sys/types.h>
#include <pthread.h>
#include <signal.h>
#include <string.h>
#include "libcgo.h"
#include "libcgo_unix.h"
static void* threadentry(void*);
static void (*setg_gcc)(void*);
void
x_cgo_init(G *g, void (*setg)(void*))
{
setg_gcc = setg;
_cgo_set_stacklo(g, NULL);
}
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
size_t size;
int err;
sigfillset(&ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &size);
// Leave stacklo=0 and set stackhi=size; mstart will do the rest.
ts->g->stackhi = size;
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fatalf("pthread_create failed: %s", strerror(err));
}
}
extern void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
stack_t ss;
ts = *(ThreadStart*)v;
free(v);
// On NetBSD, a new thread inherits the signal stack of the
// creating thread. That confuses minit, so we remove that
// signal stack here before calling the regular mstart. It's
// a bit baroque to remove a signal stack here only to add one
// in minit, but it's a simple change that keeps NetBSD
// working like other OS's. At this point all signals are
// blocked, so there is no race.
memset(&ss, 0, sizeof ss);
ss.ss_flags = SS_DISABLE;
sigaltstack(&ss, nil);
crosscall1(ts.fn, setg_gcc, ts.g);
return nil;
}


@@ -0,0 +1,61 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build openbsd && (386 || arm || amd64 || arm64 || riscv64)
#include <sys/types.h>
#include <pthread.h>
#include <signal.h>
#include <string.h>
#include "libcgo.h"
#include "libcgo_unix.h"
static void* threadentry(void*);
static void (*setg_gcc)(void*);
void
x_cgo_init(G *g, void (*setg)(void*))
{
setg_gcc = setg;
_cgo_set_stacklo(g, NULL);
}
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
size_t size;
int err;
sigfillset(&ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &size);
// Leave stacklo=0 and set stackhi=size; mstart will do the rest.
ts->g->stackhi = size;
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fatalf("pthread_create failed: %s", strerror(err));
}
}
extern void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
ts = *(ThreadStart*)v;
free(v);
crosscall1(ts.fn, setg_gcc, ts.g);
return nil;
}


@@ -0,0 +1,67 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build ppc64 || ppc64le
#include <pthread.h>
#include <string.h>
#include <signal.h>
#include "libcgo.h"
#include "libcgo_unix.h"
static void *threadentry(void*);
void (*x_cgo_inittls)(void **tlsg, void **tlsbase);
static void (*setg_gcc)(void*);
void
x_cgo_init(G *g, void (*setg)(void*), void **tlsbase)
{
setg_gcc = setg;
_cgo_set_stacklo(g, NULL);
}
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
size_t size;
int err;
sigfillset(&ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
pthread_attr_init(&attr);
pthread_attr_getstacksize(&attr, &size);
// Leave stacklo=0 and set stackhi=size; mstart will do the rest.
ts->g->stackhi = size;
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fatalf("pthread_create failed: %s", strerror(err));
}
}
extern void crosscall_ppc64(void (*fn)(void), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
ts = *(ThreadStart*)v;
_cgo_tsan_acquire();
free(v);
_cgo_tsan_release();
// Save g for this thread in C TLS
setg_gcc((void*)ts.g);
crosscall_ppc64(ts.fn, (void*)ts.g);
return nil;
}


@@ -0,0 +1,82 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
.file "gcc_riscv64.S"
/*
* void crosscall1(void (*fn)(void), void (*setg_gcc)(void *g), void *g)
*
* Calling into the gc tool chain, where all registers are caller save.
* Called from standard RISCV ELF psABI, where x8-x9, x18-x27, f8-f9 and
* f18-f27 are callee-save, so they must be saved explicitly, along with
* x1 (LR).
*/
.globl crosscall1
crosscall1:
sd x1, -200(sp)
addi sp, sp, -200
sd x8, 8(sp)
sd x9, 16(sp)
sd x18, 24(sp)
sd x19, 32(sp)
sd x20, 40(sp)
sd x21, 48(sp)
sd x22, 56(sp)
sd x23, 64(sp)
sd x24, 72(sp)
sd x25, 80(sp)
sd x26, 88(sp)
sd x27, 96(sp)
fsd f8, 104(sp)
fsd f9, 112(sp)
fsd f18, 120(sp)
fsd f19, 128(sp)
fsd f20, 136(sp)
fsd f21, 144(sp)
fsd f22, 152(sp)
fsd f23, 160(sp)
fsd f24, 168(sp)
fsd f25, 176(sp)
fsd f26, 184(sp)
fsd f27, 192(sp)
// a0 = *fn, a1 = *setg_gcc, a2 = *g
mv s1, a0
mv s0, a1
mv a0, a2
jalr ra, s0 // call setg_gcc (clobbers x27 aka g)
jalr ra, s1 // call fn
ld x1, 0(sp)
ld x8, 8(sp)
ld x9, 16(sp)
ld x18, 24(sp)
ld x19, 32(sp)
ld x20, 40(sp)
ld x21, 48(sp)
ld x22, 56(sp)
ld x23, 64(sp)
ld x24, 72(sp)
ld x25, 80(sp)
ld x26, 88(sp)
ld x27, 96(sp)
fld f8, 104(sp)
fld f9, 112(sp)
fld f18, 120(sp)
fld f19, 128(sp)
fld f20, 136(sp)
fld f21, 144(sp)
fld f22, 152(sp)
fld f23, 160(sp)
fld f24, 168(sp)
fld f25, 176(sp)
fld f26, 184(sp)
fld f27, 192(sp)
addi sp, sp, 200
jr ra
#ifdef __ELF__
.section .note.GNU-stack,"",%progbits
#endif


@@ -0,0 +1,58 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
.file "gcc_s390x.S"
/*
* void crosscall_s390x(void (*fn)(void), void *g)
*
* Calling into the gc tool chain, where all registers are caller save.
* Called from standard s390x C ABI, where r6-r13, r15, and f8-f15 are
* callee-save, so they must be saved explicitly.
*/
.globl crosscall_s390x
crosscall_s390x:
/* save r6-r15 in the register save area of the calling function */
stmg %r6, %r15, 48(%r15)
/* allocate 64 bytes of stack space to save f8-f15 */
lay %r15, -64(%r15)
/* save callee-saved floating point registers */
std %f8, 0(%r15)
std %f9, 8(%r15)
std %f10, 16(%r15)
std %f11, 24(%r15)
std %f12, 32(%r15)
std %f13, 40(%r15)
std %f14, 48(%r15)
std %f15, 56(%r15)
/* restore g pointer */
lgr %r13, %r3
/* call fn */
basr %r14, %r2
/* restore floating point registers */
ld %f8, 0(%r15)
ld %f9, 8(%r15)
ld %f10, 16(%r15)
ld %f11, 24(%r15)
ld %f12, 32(%r15)
ld %f13, 40(%r15)
ld %f14, 48(%r15)
ld %f15, 56(%r15)
/* de-allocate stack frame */
la %r15, 64(%r15)
/* restore general purpose registers */
lmg %r6, %r15, 48(%r15)
br %r14 /* restored by lmg */
#ifdef __ELF__
.section .note.GNU-stack,"",%progbits
#endif


@@ -0,0 +1,27 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build unix
#include "libcgo.h"
#include <stdlib.h>
/* Stub for calling setenv */
void
x_cgo_setenv(char **arg)
{
_cgo_tsan_acquire();
setenv(arg[0], arg[1], 1);
_cgo_tsan_release();
}
/* Stub for calling unsetenv */
void
x_cgo_unsetenv(char **arg)
{
_cgo_tsan_acquire();
unsetenv(arg[0]);
_cgo_tsan_release();
}


@@ -0,0 +1,82 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux && (amd64 || arm64 || ppc64le)
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <signal.h>
#include "libcgo.h"
// go_sigaction_t is a C version of the sigactiont struct from
// defs_linux_amd64.go. This definition (and its conversion to and from
// struct sigaction) is specific to linux/amd64.
typedef struct {
uintptr_t handler;
uint64_t flags;
uintptr_t restorer;
uint64_t mask;
} go_sigaction_t;
// SA_RESTORER is part of the kernel interface.
// This is Linux i386/amd64 specific.
#ifndef SA_RESTORER
#define SA_RESTORER 0x4000000
#endif
int32_t
x_cgo_sigaction(intptr_t signum, const go_sigaction_t *goact, go_sigaction_t *oldgoact) {
int32_t ret;
struct sigaction act;
struct sigaction oldact;
size_t i;
_cgo_tsan_acquire();
memset(&act, 0, sizeof act);
memset(&oldact, 0, sizeof oldact);
if (goact) {
if (goact->flags & SA_SIGINFO) {
act.sa_sigaction = (void(*)(int, siginfo_t*, void*))(goact->handler);
} else {
act.sa_handler = (void(*)(int))(goact->handler);
}
sigemptyset(&act.sa_mask);
for (i = 0; i < 8 * sizeof(goact->mask); i++) {
if (goact->mask & ((uint64_t)(1)<<i)) {
sigaddset(&act.sa_mask, (int)(i+1));
}
}
act.sa_flags = (int)(goact->flags & ~(uint64_t)SA_RESTORER);
}
ret = sigaction((int)signum, goact ? &act : NULL, oldgoact ? &oldact : NULL);
if (ret == -1) {
// runtime.rt_sigaction expects _cgo_sigaction to return errno on error.
_cgo_tsan_release();
return errno;
}
if (oldgoact) {
if (oldact.sa_flags & SA_SIGINFO) {
oldgoact->handler = (uintptr_t)(oldact.sa_sigaction);
} else {
oldgoact->handler = (uintptr_t)(oldact.sa_handler);
}
oldgoact->mask = 0;
for (i = 0; i < 8 * sizeof(oldgoact->mask); i++) {
if (sigismember(&oldact.sa_mask, (int)(i+1)) == 1) {
oldgoact->mask |= (uint64_t)(1)<<i;
}
}
oldgoact->flags = (uint64_t)oldact.sa_flags;
}
_cgo_tsan_release();
return ret;
}
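
A hypothetical call site, to make the bit convention concrete: bit i of go_sigaction_t.mask stands for signal i+1, so blocking SIGUSR1 (signal 10 on linux/amd64) means setting bit 9. The sketch below reuses the go_sigaction_t definition from the file above; on_usr2 and install_example are illustration names:

// Illustration only: install a plain handler for SIGUSR2 with SIGUSR1
// blocked during delivery, using the go_sigaction_t layout above.
static void on_usr2(int sig) { (void)sig; }

static void install_example(void) {
	go_sigaction_t act = {0};
	act.handler = (uintptr_t)on_usr2;
	act.flags = 0;                            // plain sa_handler, no SA_SIGINFO
	act.mask = (uint64_t)1 << (SIGUSR1 - 1);  // bit 9 = signal 10 = SIGUSR1
	x_cgo_sigaction(SIGUSR2, &act, NULL);
}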


@@ -0,0 +1,11 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build lldb
// Used by gcc_signal_darwin_arm64.c when doing the test build during cgo.
// We hope that for real binaries the definition provided by Go will take precedence
// and the linker will drop this .o file altogether, which is why this definition
// is all by itself in its own file.
void __attribute__((weak)) xx_cgo_panicmem(void) {}


@@ -0,0 +1,213 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Emulation of the Unix signal SIGSEGV.
//
// On iOS, Go tests and apps under development are run by lldb.
// The debugger uses a task-level exception handler to intercept signals.
// Despite having a 'handle' mechanism like gdb, lldb will not allow a
// SIGSEGV to pass to the running program. For Go, this means the fault
// cannot be turned into a recoverable panic, and so tests fail.
//
// We work around this by registering a thread-level mach exception handler
// and intercepting EXC_BAD_ACCESS. The kernel offers thread handlers a
// chance to resolve exceptions before the task handler, so we can generate
// the panic and avoid lldb's SIGSEGV handler.
//
// The dist tool enables this by build flag when testing.
//go:build lldb
#include <limits.h>
#include <pthread.h>
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include <mach/arm/thread_status.h>
#include <mach/exception_types.h>
#include <mach/mach.h>
#include <mach/mach_init.h>
#include <mach/mach_port.h>
#include <mach/thread_act.h>
#include <mach/thread_status.h>
#include "libcgo.h"
#include "libcgo_unix.h"
void xx_cgo_panicmem(void);
uintptr_t x_cgo_panicmem = (uintptr_t)xx_cgo_panicmem;
static pthread_mutex_t mach_exception_handler_port_set_mu;
static mach_port_t mach_exception_handler_port_set = MACH_PORT_NULL;
kern_return_t
catch_exception_raise(
mach_port_t exception_port,
mach_port_t thread,
mach_port_t task,
exception_type_t exception,
exception_data_t code_vector,
mach_msg_type_number_t code_count)
{
kern_return_t ret;
arm_unified_thread_state_t thread_state;
mach_msg_type_number_t state_count = ARM_UNIFIED_THREAD_STATE_COUNT;
// Returning KERN_SUCCESS intercepts the exception.
//
// Returning KERN_FAILURE lets the exception fall through to the
// next handler, which is the standard signal emulation code
// registered on the task port.
if (exception != EXC_BAD_ACCESS) {
return KERN_FAILURE;
}
ret = thread_get_state(thread, ARM_UNIFIED_THREAD_STATE, (thread_state_t)&thread_state, &state_count);
if (ret) {
fprintf(stderr, "runtime/cgo: thread_get_state failed: %d\n", ret);
abort();
}
// Bounce call to sigpanic through asm that makes it look like
// we call sigpanic directly from the faulting code.
#ifdef __arm64__
thread_state.ts_64.__x[1] = thread_state.ts_64.__lr;
thread_state.ts_64.__x[2] = thread_state.ts_64.__pc;
thread_state.ts_64.__pc = x_cgo_panicmem;
#else
thread_state.ts_32.__r[1] = thread_state.ts_32.__lr;
thread_state.ts_32.__r[2] = thread_state.ts_32.__pc;
thread_state.ts_32.__pc = x_cgo_panicmem;
#endif
if (0) {
// Useful debugging logic when panicmem is broken.
//
// Sends the first SIGSEGV and lets lldb catch the
// second one, avoiding a loop that locks up iOS
// devices requiring a hard reboot.
fprintf(stderr, "runtime/cgo: caught exc_bad_access\n");
fprintf(stderr, "__lr = %llx\n", thread_state.ts_64.__lr);
fprintf(stderr, "__pc = %llx\n", thread_state.ts_64.__pc);
static int pass1 = 0;
if (pass1) {
return KERN_FAILURE;
}
pass1 = 1;
}
ret = thread_set_state(thread, ARM_UNIFIED_THREAD_STATE, (thread_state_t)&thread_state, state_count);
if (ret) {
fprintf(stderr, "runtime/cgo: thread_set_state failed: %d\n", ret);
abort();
}
return KERN_SUCCESS;
}
void
darwin_arm_init_thread_exception_port()
{
// Called by each new OS thread to bind its EXC_BAD_ACCESS exception
// to mach_exception_handler_port_set.
int ret;
mach_port_t port = MACH_PORT_NULL;
ret = mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);
if (ret) {
fprintf(stderr, "runtime/cgo: mach_port_allocate failed: %d\n", ret);
abort();
}
ret = mach_port_insert_right(
mach_task_self(),
port,
port,
MACH_MSG_TYPE_MAKE_SEND);
if (ret) {
fprintf(stderr, "runtime/cgo: mach_port_insert_right failed: %d\n", ret);
abort();
}
ret = thread_set_exception_ports(
mach_thread_self(),
EXC_MASK_BAD_ACCESS,
port,
EXCEPTION_DEFAULT,
THREAD_STATE_NONE);
if (ret) {
fprintf(stderr, "runtime/cgo: thread_set_exception_ports failed: %d\n", ret);
abort();
}
ret = pthread_mutex_lock(&mach_exception_handler_port_set_mu);
if (ret) {
fprintf(stderr, "runtime/cgo: pthread_mutex_lock failed: %d\n", ret);
abort();
}
ret = mach_port_move_member(
mach_task_self(),
port,
mach_exception_handler_port_set);
if (ret) {
fprintf(stderr, "runtime/cgo: mach_port_move_member failed: %d\n", ret);
abort();
}
ret = pthread_mutex_unlock(&mach_exception_handler_port_set_mu);
if (ret) {
fprintf(stderr, "runtime/cgo: pthread_mutex_unlock failed: %d\n", ret);
abort();
}
}
static void*
mach_exception_handler(void *port)
{
// Calls catch_exception_raise.
extern boolean_t exc_server();
mach_msg_server(exc_server, 2048, (mach_port_t)(uintptr_t)port, 0);
abort(); // never returns
}
void
darwin_arm_init_mach_exception_handler()
{
pthread_mutex_init(&mach_exception_handler_port_set_mu, NULL);
// Called once per process to initialize a mach port server, listening
// for EXC_BAD_ACCESS thread exceptions.
int ret;
pthread_t thr = NULL;
pthread_attr_t attr;
sigset_t ign, oset;
ret = mach_port_allocate(
mach_task_self(),
MACH_PORT_RIGHT_PORT_SET,
&mach_exception_handler_port_set);
if (ret) {
fprintf(stderr, "runtime/cgo: mach_port_allocate failed for port_set: %d\n", ret);
abort();
}
// Block all signals to the exception handler thread
sigfillset(&ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
// Start a thread to handle exceptions.
uintptr_t port_set = (uintptr_t)mach_exception_handler_port_set;
pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
ret = _cgo_try_pthread_create(&thr, &attr, mach_exception_handler, (void*)port_set);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (ret) {
fprintf(stderr, "runtime/cgo: pthread_create failed: %d\n", ret);
abort();
}
pthread_attr_destroy(&attr);
}


@@ -0,0 +1,10 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !lldb && ios && arm64
#include <stdint.h>
void darwin_arm_init_thread_exception_port() {}
void darwin_arm_init_mach_exception_handler() {}


@@ -0,0 +1,83 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include <pthread.h>
#include <string.h>
#include <signal.h>
#include <ucontext.h>
#include "libcgo.h"
#include "libcgo_unix.h"
static void* threadentry(void*);
static void (*setg_gcc)(void*);
void
x_cgo_init(G *g, void (*setg)(void*))
{
ucontext_t ctx;
setg_gcc = setg;
if (getcontext(&ctx) != 0)
perror("runtime/cgo: getcontext failed");
g->stacklo = (uintptr_t)ctx.uc_stack.ss_sp;
// Solaris processes report a tiny stack when run with "ulimit -s unlimited".
// Correct that as best we can: assume it's at least 1 MB.
// See golang.org/issue/12210.
if(ctx.uc_stack.ss_size < 1024*1024)
g->stacklo -= 1024*1024 - ctx.uc_stack.ss_size;
// Sanity check the results now, rather than hitting a
// "morestack on g0" crash later.
if (g->stacklo >= g->stackhi) {
fatalf("bad stack bounds: lo=%p hi=%p", (void*)(g->stacklo), (void*)(g->stackhi));
}
}
void
_cgo_sys_thread_start(ThreadStart *ts)
{
pthread_attr_t attr;
sigset_t ign, oset;
pthread_t p;
void *base;
size_t size;
int err;
sigfillset(&ign);
pthread_sigmask(SIG_SETMASK, &ign, &oset);
pthread_attr_init(&attr);
if (pthread_attr_getstack(&attr, &base, &size) != 0)
perror("runtime/cgo: pthread_attr_getstack failed");
if (size == 0) {
ts->g->stackhi = 2 << 20;
if (pthread_attr_setstack(&attr, NULL, ts->g->stackhi) != 0)
perror("runtime/cgo: pthread_attr_setstack failed");
} else {
ts->g->stackhi = size;
}
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
err = _cgo_try_pthread_create(&p, &attr, threadentry, ts);
pthread_sigmask(SIG_SETMASK, &oset, nil);
if (err != 0) {
fatalf("pthread_create failed: %s", strerror(err));
}
}
extern void crosscall1(void (*fn)(void), void (*setg_gcc)(void*), void *g);
static void*
threadentry(void *v)
{
ThreadStart ts;
ts = *(ThreadStart*)v;
free(v);
crosscall1(ts.fn, setg_gcc, (void*)ts.g);
return nil;
}
