This commit is contained in:
2025-01-05 16:11:31 -06:00
parent 4914af8a45
commit d94f9eab59
9 changed files with 622 additions and 314 deletions

View File

@@ -1,7 +1,15 @@
# TERMINOLOGY
- Texture: a set of bytes on the GPU, not directly accessible
- Surface: a set of bytes in RAM, modifiable
- rect: a rectangle of {x,y,width,height}
- Image: a combination of a texture and a rect, where the rect defines the UV coordinates on the texture to draw
# Drawing, cameras, viewports, logical size, and so on
A camera is a view into the game world. Rendering a camera draws what it can see of the world. A camera may draw to a surface, or to the main window. Objects in the world render such that an object whose position equals the camera's position appears at the center of the screen. HUD functions always render with [0,0] at the bottom left of the camera's view.
Cameras always draw to their own render target. Then, they draw that render target to the framebuffer.
# COORDINATES
Screen coordinates start in the upper left corner at [0,0] and extend to the bottom right, in pixels. Raw mouse coordinates are given in screen coordinates.

View File

@@ -1 +1,3 @@
Prosperon is built in a code-first fashion.
Prosperon is built in a code-first fashion. Games in it are written in JavaScript, mostly up to, but not including, ES6. The nicest way to use it is to lean heavily on inherited properties and closures.
It provides a very high-level way of rendering, which it translates to a variety of backends.

0
docs/inspirations.md Normal file
View File

186
docs/ops.md Normal file
View File

@@ -0,0 +1,186 @@
# RENDERING PIPELINE
The basic flow for developing graphics here:
1) develop a render graph
2) decide what to draw
The render graph is the "big idea" of how data flows through a render; inside its execution, you decide "what to draw".
Prosperon provides functions to facilitate the creation of rendering pipelines. For example, you could use a "shadow_vol" function to create buffer geometry with shadow volume data.
Unity has a "graphics.rendermesh" function that you can call, and that Unity automatically calls for renderer components. It is the same here, but there are a handful of other types to draw, particularly for 2d.
## 2D
### Anatomy of a 2d renderer
Traditionally, 2d rendering is a mix of tilemaps and sprites. Today, it is still more cost-effective to render tilemaps, but we have a lot more flexibility.
- NES: 1 tilemap and up to 8 sprites per scanline.
- SNES: up to 4 tilemap backgrounds, with priority and flipping capability. 32 sprites per scanline, and by setting the priority correctly, they could appear behind background layers.
- GB: one background layer, 10 sprites per scanline / 40 per frame.
- GBA: up to 4 layers, sprites with affine transforms!
- DS: up to 4 layers, many sprites, and a 3d layer!
- Sega Saturn: this, and everything else with generic vertex processing, could do as many background layers and sprites as desired.
That last case is what you get with Prosperon on most modern computers. For more limited hardware, your options become limited too!
### Prosperon rendering
#### Layers
Every drawable 2d thing has a layer. This is an integer ranging from -9223372036854775808 to 9223372036854775807.
!!! On hardware that supports only a limited number of layers, this value must go from 0 to (layer #).
#### Layer sort
Within a layer, objects are sorted by a given criterion. By default there is none, and the engine may reorder draws to optimize for performance. You can instead choose to sort by y-axis position, for example.
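For instance, a y-sort could be expressed as a comparison function; the property used to set it here is hypothetical, not confirmed engine API:
// Hypothetical sketch: sort a layer's drawables back-to-front by y,
// so objects with a lower y (closer to the bottom of the screen) draw on top.
layer.sort = function (a, b) { return b.pos.y - a.pos.y; };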
#### Parallax
Layers can have a defined parallax value, set at the engine level. Anything on that layer will move with the provided parallax. Each layer has an implicit parallax value of 1, which means it moves "as expected". Below 1 makes it move slower (0 makes it not move at all); 2 makes it move twice as fast; and so on.
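The effect of the factor can be sketched as a screen-space offset; the function below is purely illustrative, not engine API:
// Illustrative only: how a parallax factor scales a layer's apparent motion.
// parallax 1 = moves as expected, 0 = never moves, 2 = moves twice as fast.
function parallax_screen_pos(world_pos, camera_pos, parallax) {
  return [world_pos[0] - camera_pos[0] * parallax,
          world_pos[1] - camera_pos[1] * parallax];
}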
#### Tilemaps
These are highly efficient and work just like tilemaps on old consoles. When you submit one to draw, Prosperon can efficiently cull what the camera can't see. You can have massive levels built from tilemaps without any concern for performance. A tilemap is all on its own layer.
Tiles can be flipped, and the entire tilemap can have an affine transformation applied to it.
Sprites each have their own layer and affine transform. Tilemaps are just like a large sprite.
In addition to all of this, objects can have a "draw" event, wherein you can issue direct drawing commands like "render.sprite", "render.text", "render.circle", and so on. This can be useful for special effects, like multi-pass draws (set stencil -> draw -> revert stencil). In this case, the layer setting applies to the draw event itself.
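A rough sketch of such a draw event follows; the stencil helpers and the render.sprite signature are assumptions for illustration, not confirmed engine API:
// Hypothetical multi-pass draw override using direct drawing commands.
some_object.draw = function () {
  render.set_stencil(1);            // assumed helper: start writing to the stencil
  render.sprite(this.image, this.pos);
  render.set_stencil(undefined);    // assumed helper: revert stencil state
  render.circle(this.pos, 16, [1, 1, 0, 1]);
};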
## 3D
3d models are like 3d sprites. Add them to the world, and the engine handles drawing them. If you want special effects, their "draw" command can be overridden.
As sprites and 3d models are sent to render, they are added to a list; sorted; and then finally rendered.
## THE RENDERER
### Fully scriptable
The render layer is where you do larger-scale organizing. For example, for a single outline, you might have an object's draw method be the standard:
- draw the model, setting stencil
- draw a scaled up model with a single color
But, since each object doing this won't merge its outline with the others, you need a larger-scale solution, wherein you draw *all* models that will be outlined, and then draw *all* scaled-up models with a single color. The render graph is how you could do that. The render graph calls draw and render functions, so with a tag system you can essentially choose to draw whatever you want. You can add new shadow passes; whatever. Of course, Prosperon ships with some standard render graphs to use right away.
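A sketch of what that larger-scale pass could look like with a tag system; the tag query and the two stencil pipelines are hypothetical names, not engine API:
// Hypothetical render-graph node: stencil all "outlined" objects first, then draw
// their scaled, flat-colored versions so the merged silhouette forms one outline.
function outline_pass(batch, world) {
  var outlined = world.query_tag("outlined");     // assumed tag query
  for (var obj of outlined)
    batch.draw(obj.mesh, obj.material, stencil_write_pipeline);
  for (var obj of outlined)
    batch.draw(obj.mesh, outline_material, scaled_outline_pipeline);
}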
Each graphical drawing command has a specific pipeline. A pipeline is a static object that defines every rendering detail of a drawing command.
A drawing command is composed of:
- a model
- a material
- a pipeline
The engine handles sorting these and rendering them efficiently. There are helper functions, like "render.image", which will in turn create a material and use the correct model.
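In JavaScript terms, a drawing command can be pictured as a plain object; this is a sketch of the idea, not the exact engine structure:
// Sketch: the three pieces of a drawing command.
var cmd = {
  model: quad_mesh,            // geometry buffers plus index information
  material: { diffuse: img },  // values plugged into the pipeline's shaders via reflection
  pipeline: sprite_pipeline    // complete static GPU state: shaders, blend, depth, stencil
};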
You execute a list of drawing commands onto a render target. This might be the computer screen; it might be an offscreen target.
The material's properties are copied into the shader on a given pipeline; they also can have extra properties like "castshadows", "getshadows", and so on.
An *image* is a struct {
texture: GPU texture
rect: UV coordinates
}
## 2D drawing commands
The 2d drawing commands ultimately interface with a VERY limited subset of backend knowledge, and so are easily adaptable for a wide variety of hardware and screen APIs.
The basic 2D drawing techniques are:
- Sprite: arbitrarily blit a bitmap to the screen with a given affine transformation and color.
- Tiles: uniform squares in a grid pattern, drawn all on a single layer.
- Text: generates whatever is needed to display text wrapped in a particular way at a particular coordinate.
- Particles: a higher-order construction.
- Geometry: programmer-issued circles or any other arbitrary shape. Might be slow!
## Effects
An "effect" is essentially a sequence of render commands. Typically, a sprite draws itself to a screen. It may have a unique pipeline for a special effect. But it might also have an "effect", which is a sequence of draw instructions. An example is an outline scenario, where the sprite draws a black version of itself scaled 1.1x, and then draws itself with the typical pipeline.
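One way to picture an effect is as an ordered list of draw instructions; the shape of each entry here is assumed for illustration only:
// Hypothetical effect: a scaled black silhouette pass, then the normal sprite pass.
sprite.effect = [
  { pipeline: silhouette_pipeline, scale: 1.1, color: [0, 0, 0, 1] },
  { pipeline: sprite_pipeline,     scale: 1.0, color: [1, 1, 1, 1] }
];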
## A frame
During a frame, the engine finds everything that needs to be rendered. This includes enabled models, enabled sprites, tilemaps, etc. It also includes programmer directions inside of the draw() and hud() functions.
These high-level commands are culled, accounting for off-screen sprites and so on, into a more compact command queue. This command queue is then rendered in whichever way the backend sees fit. Each command queue maps roughly onto a "render pass" in Vulkan. Once you submit a command queue, the data is sorted, required data is uploaded, and a render pass draws it to the specified frame.
A command queue is kicked off with a "batch" command.
var batch = render.batch(target, clearcolor) // target is the target buffer to draw onto
The target must be known when the batch starts so it can ensure the pipelines fed into it are compatible. If clearcolor is undefined, the batch does not erase what is present on the target before drawing. To disable depth, simply do not include a depth attachment in the target.
batch.draw(mesh, material, pipeline)
This is the most fundamental draw command you can issue. In modern parlance, the pipeline sets up the GPU completely for rendering (stencil, blend, shaders, etc.); the material plugs data into the pipeline, via reflection; the mesh determines the geometry that is drawn. A mesh defines everything that's needed to kick off a draw call, including whether the buffers are indexed or not, the number of indices to draw, and the first index to draw from.
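A mesh, then, is roughly an object like the following; the buffer names are illustrative, since the real set depends on the pipeline's vertex inputs:
// Sketch of a mesh: named vertex buffers plus index-draw information.
var mesh = {
  pos: pos_buffer,         // per-vertex positions
  uv: uv_buffer,           // per-vertex texture coordinates
  indices: index_buffer,   // omit for non-indexed draws
  num_indices: 6,          // how many indices to draw
  first_index: 0           // where in the index buffer to start
};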
batch.viewport()
batch.sprite
batch.text // a text object. faster than doing each letter as a sprite, but less flexible
// etc
batch.render(camera)
Batches can be saved to be executed again and again, so one set of batches can be created and then drawn from many cameras' perspectives. batch.render must be given a camera.
Behind the scenes, a batch tries to merge geometry and reorders draws for minimum pipeline changes.
Each render command can use its own unique pipeline, which entails its own shader, stencil buffer setup, everything. It is extremely flexible. Sprites can have their own pipeline.
ULTIMATELY:
This is a much more functional style than what is typically presented by graphics APIs. Behind the scenes these are all translated to OpenGL or whatever; being functional at this level helps to optimize.
IMPORTANT NOTE:
Optimization only happens at the object level. If you have two pipelines with the exact same characteristics, they will not be batched. Use the exact same pipeline object to batch.
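For example, following the engine's own convention of deriving pipelines with Object.create, share one derived pipeline object across every draw that should batch together (the shader name and bullet objects here are illustrative):
// One pipeline object, reused for every draw that should batch.
var glow_pipeline = Object.create(base_pipeline);
glow_pipeline.fragment = "glow.frag";   // hypothetical shader
batch.draw(bullet_a.mesh, bullet_a.material, glow_pipeline);
batch.draw(bullet_b.mesh, bullet_b.material, glow_pipeline); // same object, so it can batch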
## SCENARIOS
### BLOOM BULLETS
You want to draw a background, some ships, and some bullets that have glow to them. This amounts to four steps:
1) draw the background and ships
2) draw bullets to a texture
3) apply bloom on the bullet
4) draw bullets+bloom over the background and ships
Step 1 and steps 2-3 can be done in parallel; they constitute their own command queues. When both are done, the composite (step 4) can happen.
var bg_batch = render.batch(surf1, camera);
bg_batch.draw(background)
bg_batch.draw(ships)
bg_batch.end()
var bullet_batch = render.batch(surf2, camera);
bullet_batch.draw(bullets)
bullet_batch.end()
var bloom = render.batch(surf3, postcam)
bloom.draw(bullet_batch.color, bloom_pipeline)
bloom.end()
var final = render.batch(swapchain)
final.draw(bg_batch.color)
final.draw(bloom.color)
final.end()
When 'batch.end' is called, it reorders as needed, uploads data, and then does a render pass.
### 3D GAME WITH DIRECTIONAL LIGHT SHADOW MAP
var shadow_batch = render.batch(shadow_surf, dir_T)
shadow_batch.draw(scene, depth_mat) // scene returns a list of non-culled 3d objects; we force it to use depth_mat
shadow_batch.end()
base_mat.shadowmap = shadow_batch.color;
var main_batch = render.batch(swapchain, camera)
main_batch.draw(scene)
main_batch.end()
### FIERY LETTERS
This pseudocode draws a "hello world" cutout, with fire behind it, and then draws the game's sprites over that:
var main = render.batch(swapchain, 2dcam)
main.draw("hello world", undefined, stencil_pipeline)
main.draw(fire)
main.draw(fullscreen, undefined, stencil_reset)
main.draw(game)
main.end()

View File

@@ -1,7 +1,5 @@
var unit_transform = os.make_transform();
var sprite_mesh = {};
render.doc = {
doc: "Functions for rendering modes.",
normal: "Final render with all lighting.",
@@ -12,17 +10,24 @@ var cur = {};
cur.images = [];
cur.samplers = [];
function bind_pipeline(pass, pipeline)
{
make_pipeline(pipeline)
pass.bind_pipeline(pipeline)
pass.pipeline = pipeline;
}
var main_pass;
var base_pipeline = {
vertex: "sprite.vert",
frag: "sprite.frag",
fragment: "sprite.frag",
primitive: "triangle", // point, line, linestrip, triangle, trianglestrip
fill: true, // false for lines
depth: {
compare: "greater_equal", // never/less/equal/less_equal/greater/not_equal/greater_equal/always
test: true,
write: true,
test: false,
write: false,
bias: 0,
bias_slope_scale: 0,
bias_clamp: 0
@@ -62,23 +67,114 @@ var base_pipeline = {
mask: 0xFFFFFFFF,
domask: false
},
label: "scripted pipeline"
label: "scripted pipeline",
target: "main"
}
var cornflower = [62/255,96/255,113/255,1];
var sprite_pipeline = Object.create(base_pipeline);
var post_pipeline = Object.create(base_pipeline);
post_pipeline.stencil = {
enabled: false,
test: false
test: false,
};
post_pipeline.depth = {
test: false,
write: false
};
post_pipeline.target = "post";
//post_pipeline.vertex = "post.vert"
//post_pipeline.fragment = "post.frag"
var dbgline_pipeline = Object.create(base_pipeline);
dbgline_pipeline.vertex = "dbgline.vert.hlsl"
dbgline_pipeline.fragment = "dbgline.frag.hlsl"
dbgline_pipeline.primitive = "line"
var post_camera = {};
post_camera.transform = os.make_transform();
post_camera.transform.unit();
post_camera.zoom = 1;
// post_camera.
post_camera.size = [640,360];
post_camera.mode = 'keep';
post_camera.viewport = {x:0,y:0,width:1,height:1}
post_camera.fov = 45;
post_camera.type = 'ortho';
post_camera.aspect = 16/9;
function get_pipeline_ubo_slot(pipeline, name)
{
if (!pipeline.vertex.reflection.ubos) return;
for (var i = 0; i < pipeline.vertex.reflection.ubos.length; i++) {
var ubo = pipeline.vertex.reflection.ubos[i];
if (ubo.name.endsWith(name))
return i;
}
return undefined;
}
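// Transpose a 16-element array treated as a 4x4 matrix (swaps row-major and column-major layout).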
function transpose4x4(val) {
var out = [];
out[0] = val[0]; out[1] = val[4]; out[2] = val[8]; out[3] = val[12];
out[4] = val[1]; out[5] = val[5]; out[6] = val[9]; out[7] = val[13];
out[8] = val[2]; out[9] = val[6]; out[10] = val[10];out[11] = val[14];
out[12] = val[3];out[13] = val[7];out[14] = val[11];out[15] = val[15];
return out;
}
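// Pack the fields of obj into an ArrayBuffer laid out to match the UBO named `name`,
// using the vertex shader's reflection data for member offsets.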
function ubo_obj_to_array(pipeline, name, obj)
{
var ubo;
for (var i = 0; i < pipeline.vertex.reflection.ubos.length; i++) {
ubo = pipeline.vertex.reflection.ubos[i];
if (ubo.name.endsWith(name)) break;
}
var type = pipeline.vertex.reflection.types[ubo.type];
var len = 0;
for (var mem of type.members)
len += type_to_byte_count(mem.type);
var buf = new ArrayBuffer(len);
var view = new DataView(buf);
for (var mem of type.members) {
var val = obj[mem.name];
if (!val) throw new Error (`Could not find ${mem.name} on supplied object`);
if (mem.name === 'model')
val = transpose4x4(val.array());
for (var i = 0; i < val.length; i++)
view.setFloat32(mem.offset + i*4, val[i],true);
}
return buf;
}
function type_to_byte_count(type)
{
switch (type) {
case 'float':
return 4;
case 'vec2':
return 8;
case 'vec3':
return 12;
case 'vec4':
return 16;
case 'mat4':
return 64;
// Add cases as needed
default:
throw new Error("Unknown or unsupported float-based type: " + type);
}
}
var sprite_model_ubo = {
model: unit_transform,
color: [1,1,1,1]
};
render.poly_prim = function poly_prim(verts) {
var index = [];
@@ -103,32 +199,23 @@ render.poly_prim = function poly_prim(verts) {
var shader_cache = {};
var shader_times = {};
function strip_shader_inputs(shader) {
for (var a of shader.vs.inputs) a.name = a.name.slice(2);
}
render.hotreload = function shader_hotreload() {
for (var i in shader_times) {
if (io.mod(i) <= shader_times[i]) continue;
say(`HOT RELOADING SHADER ${i}`);
shader_times[i] = io.mod(i);
var obj = create_shader_obj(i);
obj = obj[os.sys()];
var old = shader_cache[i];
Object.assign(shader_cache[i], obj);
cur.bind = undefined;
cur.mesh = undefined;
}
render.hotreload = function shader_hotreload(file) {
console.warn('reimplement shader hot reloading for ' + file)
};
function make_pipeline(pipeline) {
if (pipeline.gpu) return; // this pipeline has already been made
if (typeof pipeline.vertex === 'string')
pipeline.vertex = make_shader(pipeline.vertex);
if (typeof pipeline.fragment === 'string')
pipeline.fragment = make_shader(pipeline.fragment)
// 1) Reflection data for vertex shader
var refl = pipeline.vertex.reflection
if (!refl || !refl.inputs || !Array.isArray(refl.inputs)) {
// If there's no reflection data, just pass pipeline along
// or throw an error if reflection is mandatory
render._main.make_pipeline(pipeline)
return
pipeline.gpu = render._main.make_pipeline(pipeline);
return;
}
var inputs = refl.inputs
@@ -185,8 +272,7 @@ function make_pipeline(pipeline) {
pipeline.vertex_attributes = attributes
// 4) Hand off the pipeline to native code
console.log(`depth: ${json.encode(pipeline.depth)}`);
return render._main.make_pipeline(pipeline)
pipeline.gpu = render._main.make_pipeline(pipeline);
}
function make_shader(sh_file) {
@@ -243,13 +329,13 @@ render.device = {
gamegear: [160, 144, 3.2],
};
var sprite_stack = [];
var render_queue = [];
render.device.doc = `Device resolutions given as [x,y,inches diagonal].`;
var std_sampler;
var tbuffer;
var spritemesh;
function upload_model(model)
{
var bufs = [];
@@ -261,106 +347,144 @@ function upload_model(model)
tbuffer = render._main.upload(this, bufs, tbuffer);
}
function bind_model(model)
function bind_model(pass,pipeline,model)
{
var pipeline = this.pipeline;
var buffers = pipeline.vertex_buffer_descriptions;
var bufs = [];
if (buffers)
for (var b of buffers) {
if (b.name in model)
bufs.push(model[b.name])
else
throw Error (`could not find buffer ${b.name} on model`);
}
this.bind_buffers(0,bufs);
this.bind_index_buffer(model.indices);
pass.bind_buffers(0,bufs);
pass.bind_index_buffer(model.indices);
}
function bind_mat(mat)
function bind_mat(pass, pipeline, mat)
{
var pipeline = this.pipeline;
var imgs = [];
var refl = pipeline.fragment.reflection;
if (refl.separate_images) {
for (var i of pipeline.fragment.reflection.separate_images) {
for (var i of refl.separate_images) {
if (i.name in mat) {
var tex = mat[i.name];
imgs.push({texture:tex.texture, sampler:tex.sampler});
} else
throw Error (`could not find all necessary images: ${i.name}`)
}
this.bind_samplers(false, 0,imgs);
pass.bind_samplers(false, 0,imgs);
}
}
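// Group consecutive sprites in the queue by image, so each group becomes one indexed draw (6 indices per sprite).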
function group_sprites_by_texture(sprites)
{
if (sprites.length === 0) return;
var groups = [];
var lasttex = sprites[0].image;
var group = [];
var first = 0;
for (var i = 0; i < sprites.length; i++) {
if (lasttex !== sprites[i].image) {
groups.push({image:lasttex, num_indices:(i-first)*6, first_index:first*6});
lasttex = sprites[i].image;
first = i;
group = [];
}
group.push(sprites[i])
}
groups.push({
image:lasttex,num_indices:(sprites.length-first)*6,first_index:first*6
})
return groups;
}
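// Render one camera: build a sprite mesh from the render queue, bind pipeline/material/model
// per texture group, push camera and model uniform data, and issue indexed draws into the camera's target.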
function render_camera(camera)
{
if (render_queue.length == 0) return;
camera.target ??= render._main.mainRT(prosperon.camera.size);
var cmds = render._main.acquire_cmd_buffer();
var myimg = game.texture("pockle");
myimg.sampler = std_sampler;
spritemesh = render._main.make_sprite_mesh(sprite_stack, spritemesh);
var spritemesh = render._main.make_sprite_mesh(render_queue);
cmds.upload_model(spritemesh);
sprite_stack.length = 0;
var pass = cmds.render_pass(camera.target);
if (!pass.__proto__.bind_model) {
pass.__proto__.bind_model = bind_model;
pass.__proto__.bind_mat = bind_mat;
var pass = cmds.render_pass(camera.target, cornflower);
var camera = prosperon.camera;
var draw_cmds = group_sprites_by_texture(render_queue);
console.log(json.encode(draw_cmds))
for (var group of draw_cmds) {
var pipeline = sprite_pipeline;
var mesh = spritemesh;
var img = group.image;
img.sampler = std_sampler;
bind_pipeline(pass, pipeline);
bind_mat(pass, pipeline, {diffuse:img});
bind_model(pass,pipeline,spritemesh);
var camslot = get_pipeline_ubo_slot(pipeline, 'TransformBuffer');
if (typeof camslot !== 'undefined')
cmds.camera(camera, pass, undefined, camslot);
var modelslot = get_pipeline_ubo_slot(pipeline, "model");
if (typeof modelslot !== 'undefined') {
var ubo = ubo_obj_to_array(pipeline, 'model', sprite_model_ubo);
cmds.push_vertex_uniform_data(modelslot, ubo);
}
pass.draw_indexed(group.num_indices, 1, mesh.first_index, 0, 0);
}
pass.bind_pipeline(base_pipeline);
pass.bind_model(spritemesh);
pass.bind_mat({diffuse:myimg});
cmds.camera(prosperon.camera, pass);
pass.draw(spritemesh.count,1,0,0,0);
/* cmds.camera(prosperon.camera);
pass.bind_pipeline(pipeline_model.gpu);
bind_model(pass,ducky.mesh,pipeline_model);
bind_mat(pass,ducky.material,pipeline_model);
pass.draw(ducky.mesh.count,1,0,0,0);
*/
// cmds.camera(prosperon.camera, true);
// pass.bind_pipeline(base_pipeline.gpu);
// pass.draw(spritemesh.count,1,0,0,0);
pass.end();
cmds.submit();
render_queue.length = 0;
}
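// Present: render the main camera into its offscreen target, then draw that target's
// color texture to the swapchain through the post pipeline.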
function gpupresent()
{
try{
try{
render_camera(prosperon.camera);
} catch(e) { console.error(e); } finally {
var cmds = render._main.acquire_cmd_buffer();
render.image(prosperon.camera.target.color_targets[0], {x:0,y:0,width:200,height:200});
var mmesh = render._main.make_sprite_mesh(sprite_stack);
sprite_stack.length = 0;
cmds.upload_model(mmesh);
var winsize = render._main.window.size;
var pipeline = post_pipeline;
var T = os.make_transform();
T.trs([0,0],undefined, [winsize.x, winsize.y]);
var pass = cmds.swapchain_pass();
cmds.camera(prosperon.camera, pass);
pass.bind_pipeline(base_pipeline);
pass.bind_model(mmesh);
bind_pipeline(pass,pipeline);
bind_model(pass,pipeline,quad_model);
var camslot = get_pipeline_ubo_slot(pipeline, 'TransformBuffer');
if (typeof camslot !== 'undefined') {
post_camera.size = render._main.window.size;
cmds.camera(post_camera, pass, undefined, camslot);
}
var modelslot = get_pipeline_ubo_slot(pipeline, "model");
if (typeof modelslot !== 'undefined') {
var ubo = ubo_obj_to_array(pipeline, 'model', {model:T, color:[1,1,1,1]});
cmds.push_vertex_uniform_data(modelslot, ubo);
}
var mat = {};
mat.diffuse = {
texture:prosperon.camera.target.color_targets[0].texture,
sampler:std_sampler
};
pass.bind_mat(mat);
pass.draw(mmesh.count,1,0,0,0);
bind_mat(pass, pipeline, mat);
pass.draw_indexed(quad_model.num_indices,1,quad_model.first_index,0,0);
pass.end();
cmds.submit();
}
}
var display_res;
function logical_size(size)
{
display_res = size;
}
var ducky;
var pipeline_model;
pipeline_model = Object.create(base_pipeline);
pipeline_model.vertex = "model.vert"
pipeline_model.fragment = "model.frag"
var quad_model;
render.init = function () {
@@ -375,7 +499,6 @@ render.init = function () {
quad_model = render._main.make_quad();
io.mount("core");
render._main.present = gpupresent;
render._main.logical_size = logical_size;
ducky = os.model_buffer("Duck.glb");
for (var mesh of ducky) {
var mat = mesh.material;
@@ -396,52 +519,6 @@ render.init = function () {
cmds.upload_model(ducky.mesh);
cmds.upload_model(quad_model);
cmds.submit();
var sprite_vert = make_shader("sprite.vert");
var sprite_frag = make_shader("sprite.frag");
base_pipeline.vertex = sprite_vert;
base_pipeline.fragment = sprite_frag;
base_pipeline.gpu = make_pipeline(base_pipeline);
post_pipeline.vertex = make_shader("post.vert");
post_pipeline.fragment = make_shader("post.frag");
post_pipeline.gpu = make_pipeline(post_pipeline);
var model_vert = make_shader("model.vert");
var model_frag = make_shader("model.frag");
pipeline_model = Object.create(base_pipeline);
pipeline_model.vertex = model_vert;
pipeline_model.fragment = model_frag;
pipeline_model.gpu = make_pipeline(pipeline_model);
/* os.make_circle2d().draw = function () {
render.circle(this.body().transform().pos, this.radius, [1, 1, 0, 1]);
};
var disabled = [148 / 255, 148 / 255, 148 / 255, 1];
var sleep = [1, 140 / 255, 228 / 255, 1];
var dynamic = [1, 70 / 255, 46 / 255, 1];
var kinematic = [1, 194 / 255, 64 / 255, 1];
var static_color = [73 / 255, 209 / 255, 80 / 255, 1];
os.make_poly2d().draw = function () {
var body = this.body();
var color = body.sleeping() ? [0, 0.3, 0, 0.4] : [0, 1, 0, 0.4];
var t = body.transform();
render.poly(this.points, color, body.transform());
color.a = 1;
render.line(this.points.wrapped(1), color, 1, body.transform());
};
os.make_seg2d().draw = function () {
render.line([this.a(), this.b()], [1, 0, 1, 1], Math.max(this.radius / 2, 1), this.body().transform());
};
joint.pin().draw = function () {
var a = this.bodyA();
var b = this.bodyB();
render.line([a.transform().pos.xy, b.transform().pos.xy], [0, 1, 1, 1], 1);
};
*/
};
render.draw_sprites = true;
@@ -451,38 +528,6 @@ render.draw_gui = true;
render.draw_gizmos = true;
render.sprites = function render_sprites() {
// bucket & draw
/* var sorted = allsprites.sort((a,b) => {
if (a.gameobject.drawlayer !== b.gameobject.drawlayer) return a.gameobject.drawlayer - b.gameobject.drawlayer;
if (a.image.texture !== b.image.texture) return a.image.texture.path.localeCompare(b.image.texture.path);
return a.gameobject.transform.pos.y - b.gameobject.transform.pos.y;
});
if (sorted.length === 0) return;
var tex = undefined;
var buckets = [];
var group = [sorted[0]];
for (var i = 1; i < sorted.length; i++) {
if (sorted[i].image.texture !== sorted[i-1].image.texture) {
buckets.push(group);
group = [];
}
group.push(sorted[i]);
}
if (group.length>0) buckets.push(group);
render.use_shader(spritessboshader);
for (var img of buckets) {
var sparray = img;
if (sparray.length === 0) continue;
var ss = sparray[0];
ss.baseinstance = render.make_sprite_ssbo(sparray,sprite_ssbo);
render.use_mat(ss);
render.draw(shape.quad,sprite_ssbo,sparray.length);
}
*/
var buckets = component.sprite_buckets();
if (buckets.length === 0) return;
render.use_shader(spritessboshader);
@@ -593,10 +638,21 @@ function flush_poly() {
poly_idx = 0;
}
// render.line has uv and can be texture mapped; dbg_line is hardware standard lines
render.line = function render_line(points, color = Color.white, thickness = 1, pipe = base_pipeline) {
render._main.line(points, color);
// render._main.line(points, color);
};
render.dbg_line = function(points, color = Color.white)
{
}
render.dbg_point = function(points, color = Color.white)
{
}
/* All draw in screen space */
render.point = function (pos, size, color = Color.blue) {
render._main.point(pos,color);
@@ -629,11 +685,21 @@ render.rectangle = function render_rectangle(rect, color = Color.white, pipe = b
render._main.fillrect(rect,color);
};
render.text = function text(str, rect, font = cur_font, size = 0, color = Color.white, wrap = 0, pipe = base_pipeline) {
if (typeof font === 'string')
font = render.get_font(font)
var mesh = os.make_text_buffer(str, rect, 0, color, wrap, font);
render._main.geometry(font.texture, mesh);
render.text = function text(text, rect, font = cur_font, size = 0, color = Color.white, wrap = 0, pipe = base_pipeline) {
// if (typeof font === 'string')
// font = render.get_font(font)
// var mesh = os.make_text_buffer(text, rect, 0, color, wrap, font);
// render._main.geometry(font.texture, mesh);
render_queue.push({
type: 'text',
text,
rect,
font,
size,
color,
wrap,
pipe
});
return;
if (typeof font === 'string')
@@ -726,24 +792,6 @@ var stencil_invert = {
depth_fail_op: "invert",
pass_op: "invert"
};
var stencil_inverter = Object.create(base_pipeline);
/*Object.assign(stencil_inverter, {
stencil: {
enabled: true,
front: stencil_invert,
back:stencil_invert,
write:true,
read:true,
ref: 0
},
write_mask: colormask.none
});*/
render.invertmask = function()
{
render.forceflush();
render.use_shader('screenfill.cg', stencil_inverter);
render.draw(shape.quad);
}
render.mask = function mask(image, pos, scale, rotation = 0, ref = 1)
{
@@ -808,7 +856,8 @@ render.geometry = function geometry(material, geometry)
render._main.geometry(material.diffuse.texture, geometry);
}
render.image = function image(image, rect = [0,0], rotation = 0, color = Color.white) {
// queues to be flushed later
render.image = function image(image, rect = [0,0], rotation = 0, color = Color.white, pipeline = base_pipeline) {
if (!image) throw Error ('Need an image to render.')
if (typeof image === "string")
image = game.texture(image);
@@ -817,12 +866,12 @@ render.image = function image(image, rect = [0,0], rotation = 0, color = Color.w
rect.height ??= image.texture.height;
var T = os.make_transform();
T.rect(rect);
sprite_stack.push({
render_queue.push({
transform: T,
color: color,
image:image
image:image,
pipeline: pipeline
});
// render._main.texture(image.texture, rect, image.rect, color);
};
render.images = function images(image, rects)
@@ -863,6 +912,7 @@ render.images = function images(image, rects)
// slice is given in pixels
render.slice9 = function slice9(image, rect = [0,0], slice = 0, color = Color.white) {
render.image(image,rect,undefined,color); return;
if (typeof image === 'string')
image = game.texture(image);
@@ -919,9 +969,8 @@ render.cross.doc = "Draw a cross centered at pos, with arm length size.";
render.arrow.doc = "Draw an arrow from start to end, with wings of length wingspan at angle wingangle.";
render.rectangle.doc = "Draw a rectangle, with its corners at lowerleft and upperright.";
render.draw = function render_draw(mesh, ssbo, inst = 1, e_start = 0) {
sg_bind(mesh, ssbo);
render.spdraw(e_start, cur.bind.count, inst);
render.draw = function render_draw(mesh, material, pipeline) {
};
render.viewport = function(rect)
@@ -1127,25 +1176,17 @@ var imgui_fn = function imgui_fn() {
if (Math.abs(wh.x - basesize.scale(mult-1).x) < Math.abs(wh.x - trysize.x))
mult--;
prosperon.window_render(basesize.scale(mult));
\prosperon.window_render(basesize.scale(mult));
*/
var clearcolor = [100,149,237,255].scale(1/255);
prosperon.render = function prosperon_render() {
try{
render._main.newframe(prosperon.window, clearcolor);
// render._main.draw_color(clearcolor);
// render._main.clear();
try { prosperon.camera.render(); } catch(e) { console.error(e) }
try { prosperon.app(); } catch(e) { console.error(e) }
if (debug.show) try { imgui_fn(); } catch(e) { console.error(e) }
//if (debug.show) try { imgui_fn(); } catch(e) { console.error(e) }
} catch(e) {
console.error(e)
} finally {
render._main.present();
tracy.end_frame();
}
};
@@ -1158,6 +1199,8 @@ try {
if (e.file.startsWith('.')) return;
if (e.file.endsWith('.js'))
actor.hotreload(e.file);
else if (e.file.endsWith('.hlsl'))
shader_hotreload(e.file);
else if (Resources.is_image(e.file))
game.tex_hotreload(e.file);
} catch(e) { console.error(e); }
@@ -1179,11 +1222,11 @@ try {
try { prosperon.appupdate(dt); } catch(e) { console.error(e) }
input.procdown();
try {
update_emitters(dt * game.timescale);
os.update_timers(dt * game.timescale);
prosperon.update(dt*game.timescale);
prosperon.draw();
} catch(e) { console.error(e) }
if (sim.mode === "step") sim.pause();
@@ -1199,11 +1242,13 @@ try {
*/
}
try { prosperon.draw(); } catch(e) {console.error(e)}
prosperon.render();
// tracy.gpu_zone(prosperon.render);
} catch(e) {
console.error(e)
}
tracy.end_frame();
};
return { render };

View File

@@ -15,11 +15,18 @@ struct output
float4 color : COLOR0; // Interpolated vertex color
};
cbuffer model : register(b1, space1)
{
float4x4 model;
float4 color;
};
output main(input i)
{
output o;
o.pos = mul(world_to_projection, float4(i.pos,0,1));
float4 worldpos = mul(model, float4(i.pos,0,1));
o.pos = mul(world_to_projection, worldpos);
o.uv = i.uv;
o.color = i.color;
o.color = i.color * color;
return o;
}

View File

@@ -15,7 +15,7 @@ struct output
output main(input i)
{
output o;
o.pos = float4(i.pos, 0, 1);
o.pos = mul(world_to_projection, float4(i.pos, 0, 1));
o.uv = i.uv;
return o;
}

View File

@@ -22,6 +22,12 @@ struct text_vert {
typedef struct text_vert text_vert;
struct text_char {
rect pos;
rect uv;
HMM_Vec4 color;
};
struct shader;
struct window;

View File

@@ -184,6 +184,9 @@ JSValue val = JS_GetPropertyUint32(JS,VAL,I); \
TO = js2##TYPE(JS, val); \
JS_FreeValue(JS, val); } \
static SDL_GPUGraphicsPipelineTargetInfo main_info = {0};
static SDL_GPUGraphicsPipelineTargetInfo post_info = {0};
JSValue number2js(JSContext *js, double g) { return JS_NewFloat64(js,g); }
double js2number(JSContext *js, JSValue v) {
double g;
@@ -323,7 +326,6 @@ static BufferCheckResult get_or_extend_buffer(
}
// If we reach here, we need a new buffer
res.need_new = 1;
printf("NEED NEW BUFFER\n");
return res;
}
@@ -1010,6 +1012,7 @@ JSC_CCALL(os_make_text_buffer,
JS_SetProperty(js, ret, indices_atom, jsidx);
JS_SetProperty(js, ret, vertices_atom, number2js(js, verts));
JS_SetProperty(js, ret, count_atom, number2js(js, count));
JS_SetPropertyStr(js,ret,"num_indices", number2js(js,count));
return ret;
)
@@ -2009,6 +2012,7 @@ JSC_SCALL(SDL_Window_make_gpu,
SDL_Window *win = js2SDL_Window(js,self);
SDL_GPUDevice *gpu = SDL_CreateGPUDevice(SDL_GPU_SHADERFORMAT_SPIRV | SDL_GPU_SHADERFORMAT_DXIL | SDL_GPU_SHADERFORMAT_MSL, 1, NULL);
global_gpu = gpu;
return SDL_GPUDevice2js(js,gpu);
)
@@ -2698,6 +2702,7 @@ JSC_CCALL(gpu_load_gltf_model,
// count is usually the number of indices if available, else vertex_count
JS_SetProperty(js, ret, vertices_atom, number2js(js, vertex_count));
JS_SetProperty(js, ret, count_atom, number2js(js, index_count > 0 ? index_count : vertex_count));
JS_SetPropertyStr(js,ret,"num_indices", number2js(js, index_count > 0 ? index_count : vertex_count));
// Cleanup
free(positions);
@@ -2818,12 +2823,29 @@ static const JSCFunctionListEntry js_SDL_Renderer_funcs[] = {
// GPU API
JSC_CCALL(gpu_claim_window,
SDL_GPUDevice *d = js2SDL_GPUDevice(js,self);
SDL_Window *w = js2SDL_Window(js, argv[0]);
SDL_ClaimWindowForGPUDevice(d,w);
if (!SDL_SetGPUSwapchainParameters(d,w,SDL_GPU_SWAPCHAINCOMPOSITION_SDR, SDL_GPU_PRESENTMODE_MAILBOX))
SDL_GPUDevice *gpu = js2SDL_GPUDevice(js,self);
SDL_Window *win = js2SDL_Window(js, argv[0]);
SDL_ClaimWindowForGPUDevice(gpu,win);
if (!SDL_SetGPUSwapchainParameters(gpu,win,SDL_GPU_SWAPCHAINCOMPOSITION_SDR, SDL_GPU_PRESENTMODE_IMMEDIATE))
printf("Could not set: %s\n", SDL_GetError());
// SDL_SetGPUAllowedFramesInFlight(d, 1);
// SDL_SetGPUAllowedFramesInFlight(gpu, 1);
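// Cache the two pipeline target layouts used by make_pipeline: "main" (swapchain color + depth/stencil)
// and "post" (swapchain color only).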
SDL_GPUColorTargetDescription *dsc = calloc(sizeof(*dsc),1);
dsc->format = SDL_GetGPUSwapchainTextureFormat(gpu, win);
main_info =(SDL_GPUGraphicsPipelineTargetInfo) {
.num_color_targets = 1,
.color_target_descriptions = dsc,
.has_depth_stencil_target = 1,
.depth_stencil_format = RT_DEPTH,
};
post_info =(SDL_GPUGraphicsPipelineTargetInfo) {
.num_color_targets = 1,
.color_target_descriptions = dsc,
.has_depth_stencil_target = 0,
};
)
JSC_CCALL(cmd_swapchain_pass,
@@ -2841,6 +2863,7 @@ JSC_CCALL(cmd_swapchain_pass,
1,
NULL
);
if (!pass) return JS_ThrowReferenceError(js, "Unable to create swapchain pass: %s", SDL_GetError());
JSValue jspass = SDL_GPURenderPass2js(js,pass);
JS_SetPropertyStr(js,jspass,"size", vec22js(js,(HMM_Vec2){w,h}));
return jspass;
@@ -2849,7 +2872,6 @@ JSC_CCALL(cmd_swapchain_pass,
JSC_CCALL(gpu_mainRT,
SDL_GPUDevice *gpu = js2SDL_GPUDevice(js,self);
HMM_Vec2 size = js2vec2(js,argv[0]);
printf("rt of size %g,%g\n", size.x, size.y);
JSValue color_targets = JS_NewArray(js);
SDL_GPUTexture *colortex = SDL_CreateGPUTexture(gpu, &(SDL_GPUTextureCreateInfo) {
.type = SDL_GPU_TEXTURETYPE_2D,
@@ -2878,7 +2900,9 @@ JSC_CCALL(gpu_mainRT,
JS_SetPropertyStr(js,color_tar,"store_op", JS_NewString(js,"store"));
JSValue depth_tar = JS_NewObject(js);
JS_SetPropertyStr(js,depth_tar, "texture", SDL_GPUTexture2js(js,depthtex));
JSValue js_depthtex = SDL_GPUTexture2js(js,depthtex);
JS_SetPropertyStr(js,js_depthtex, "format", number2js(js,RT_DEPTH));
JS_SetPropertyStr(js,depth_tar, "texture", js_depthtex);
JS_SetPropertyStr(js,depth_tar,"mip_level", number2js(js,0));
JS_SetPropertyStr(js,depth_tar,"load_op", JS_NewString(js,"clear"));
JS_SetPropertyStr(js,depth_tar,"store_op", JS_NewString(js,"store"));
@@ -2886,6 +2910,8 @@ JSC_CCALL(gpu_mainRT,
JS_SetPropertyStr(js,depth_tar,"stencil_load_op", JS_NewString(js,"clear"));
JS_SetPropertyStr(js,depth_tar,"clear_stencil", number2js(js,0));
JS_SetPropertyStr(js,depth_tar,"clear_depth", number2js(js,1));
SDL_SetGPUTextureName(gpu, depthtex, "main pass depth");
JS_SetPropertyUint32(js,color_targets,0,color_tar);
@@ -3202,6 +3228,7 @@ int atom2blend_op(JSAtom atom)
else return SDL_GPU_BLENDOP_ADD;
}
static JSValue js_gpu_make_pipeline(JSContext *js, JSValueConst self, int argc, JSValueConst *argv) {
SDL_GPUDevice *gpu = js2SDL_GPUDevice(js,self);
if (argc < 1)
@@ -3589,17 +3616,15 @@ static JSValue js_gpu_make_pipeline(JSContext *js, JSValueConst self, int argc,
// info.blend_state.alpha_to_coverage = JS_ToBool(js, atc_val);
JS_FreeValue(js, atc_val);
JSValue jswin = JS_GetPropertyStr(js,self,"window");
SDL_Window *win = js2SDL_Window(js, jswin);
info.target_info =(SDL_GPUGraphicsPipelineTargetInfo) {
.num_color_targets = 1,
.color_target_descriptions = (SDL_GPUColorTargetDescription[]) {{
.format = SDL_GetGPUSwapchainTextureFormat(gpu, win)
}},
.has_depth_stencil_target = 1,
.depth_stencil_format = RT_DEPTH,
};
JS_FreeValue(js,jswin);
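// Pick the cached target layout based on the pipeline's "target" property
// ("post" = color only, otherwise the main color + depth layout).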
JSValue js_tar = JS_GetPropertyStr(js,pipe,"target");
const char *tar = JS_ToCString(js,js_tar);
JS_FreeValue(js,js_tar);
if (!strcmp(tar,"post"))
info.target_info = post_info;
else
info.target_info = main_info;
JS_FreeCString(js,tar);
// Create the pipeline
SDL_GPUGraphicsPipeline *pipeline = SDL_CreateGPUGraphicsPipeline(gpu, &info);
@@ -3608,12 +3633,6 @@ static JSValue js_gpu_make_pipeline(JSContext *js, JSValueConst self, int argc,
return SDL_GPUGraphicsPipeline2js(js, pipeline);
}
JSC_CCALL(gpu_newframe,
SDL_GPUDevice *gpu = js2SDL_GPUDevice(js,self);
SDL_Window *win = js2SDL_Window(js,argv[0]);
)
// Helper conversion functions:
int atom2filter(JSAtom atom)
{
@@ -3797,69 +3816,9 @@ static JSValue js_gpu_make_sampler(JSContext *js, JSValueConst self, int argc, J
}
// Convert sampler pointer to JS object or handle as needed
printf("sampler created was %p\n", sampler);
return SDL_GPUSampler2js(js, sampler); // You need to implement SDL_GPUSampler2js similar to pipeline
}
/*JSC_CCALL(gpu_present,
for (size_t i = 0; i < arrlen(batches)-1; i++) {
size_t vert_end = batches[i+1].start_vert; // one past the last vertex of this batch
size_t vert_count = vert_end - vert_start;
size_t idx_end = batches[i+1].start_indx; // one past the last index of this batch
size_t idx_count = idx_end - idx_start;
SDL_DrawGPUIndexedPrimitives(
pass,
(int)idx_count, // How many indices to draw
1, // Instance count
(int)idx_start, // Starting index offset
(int)vert_start, // Vertex offset
0 // First instance (if using instancing)
);
vert_start = vert_end;
idx_start = idx_end;
}
SDL_EndGPURenderPass(pass);
SDL_SubmitGPUCommandBuffer(cmds);
SDL_ReleaseGPUBuffer(gpu,vertex_buffer);
SDL_ReleaseGPUBuffer(gpu,index_buffer);
arrsetlen(global_verts,0);
arrsetlen(global_indices,0);
)
*/
JSC_CCALL(gpu_camera,
SDL_GPUDevice *gpu = js2SDL_GPUDevice(js,self);
SDL_Rect vp;
vp.w = 1280;
vp.h = 718;
int centered = JS_ToBool(js,argv[1]);
HMM_Mat4 proj;
if (centered) {
// World coordinates: (0,0) at screen center, Y up
proj = HMM_Orthographic_RH_NO(
-(float)vp.w * 0.5f, (float)vp.w * 0.5f, // left, right
-(float)vp.h * 0.5f, (float)vp.h * 0.5f, // bottom, top
-1.0f, 1.0f
);
} else {
// UI coordinates: (0,0) at bottom-left corner, Y up
proj = HMM_Orthographic_RH_NO(
0.0f, (float)vp.w, // left, right
0.0f, (float)vp.h, // bottom, top
-1.0f, 1.0f
);
}
transform *tra = js2transform(js, argv[0]);
HMM_Mat4 view = HMM_Translate((HMM_Vec3){-tra->pos.x, -tra->pos.y, 0.0f});
)
JSC_CCALL(gpu_scale,
)
@@ -4130,6 +4089,7 @@ JSC_CCALL(gpu_make_sprite_mesh,
JS_SetProperty(js, ret, vertices_atom, number2js(js, verts));
JS_SetProperty(js, ret, count_atom, number2js(js, count));
JS_SetPropertyStr(js,ret,"num_indices", number2js(js,count));
// Free temporary CPU arrays
free(posdata);
@@ -4159,10 +4119,10 @@ JSC_CCALL(gpu_make_quad,
posdata[2] = (HMM_Vec2){0,0};
posdata[3] = (HMM_Vec2){1,0};
uvdata[0] = (HMM_Vec2){0,1};
uvdata[1] = (HMM_Vec2){1,1};
uvdata[2] = (HMM_Vec2){0,0};
uvdata[3] = (HMM_Vec2){1,0};
uvdata[0] = (HMM_Vec2){0,0};
uvdata[1] = (HMM_Vec2){1,0};
uvdata[2] = (HMM_Vec2){0,1};
uvdata[3] = (HMM_Vec2){1,1};
colordata[0] = usecolor;
colordata[1] = usecolor;
@@ -4182,6 +4142,7 @@ JSC_CCALL(gpu_make_quad,
JS_SetProperty(js, ret, vertices_atom, number2js(js, verts));
JS_SetProperty(js, ret, count_atom, number2js(js, count));
JS_SetPropertyStr(js,ret,"num_indices", number2js(js,count));
// Free temporary CPU arrays
free(posdata);
@@ -4347,8 +4308,6 @@ JSC_CCALL(gpu_upload,
// ensure it's large enough
size_t transfer_size = js_getnum_str(js,js_transfer, "size");
if (transfer_size < total_size) {
printf("NEED A LARGER TRANSFER BUFFER\n");
// Need a new one
transfer = SDL_CreateGPUTransferBuffer( gpu, &(SDL_GPUTransferBufferCreateInfo){
.usage = SDL_GPU_TRANSFERBUFFERUSAGE_UPLOAD,
.size = total_size_needed
@@ -4358,8 +4317,6 @@ JSC_CCALL(gpu_upload,
} else
ret = JS_DupValue(js,js_transfer); // supplied transfer buffer is fine so we use it
} else {
// Need a new one
printf("NEED A NEW TRANSFER BUFFER\n");
transfer = SDL_CreateGPUTransferBuffer( gpu, &(SDL_GPUTransferBufferCreateInfo){
.usage = SDL_GPU_TRANSFERBUFFERUSAGE_UPLOAD,
.size = total_size_needed
@@ -4375,7 +4332,7 @@ JSC_CCALL(gpu_upload,
return JS_ThrowReferenceError(js, "Failed to begin copy pass");
}
void *mapped_data = SDL_MapGPUTransferBuffer(gpu, transfer, false);
void *mapped_data = SDL_MapGPUTransferBuffer(gpu, transfer, true);
if (!mapped_data) {
SDL_EndGPUCopyPass(copy_pass);
free(items);
@@ -4405,7 +4362,7 @@ JSC_CCALL(gpu_upload,
.offset = 0,
.size = items[i].size
},
false
true
);
current_offset += items[i].size;
}
@@ -4415,6 +4372,26 @@ JSC_CCALL(gpu_upload,
free(items);
)
JSC_CCALL(gpu_wait_for_fences,
SDL_GPUDevice *gpu = js2SDL_GPUDevice(js,self);
int n = js_arrlen(js,argv[0]);
SDL_GPUFence *fences[n];
for (int i = 0; i < n; i++) {
JSValue a = JS_GetPropertyUint32(js,argv[0],i);
fences[i] = js2SDL_GPUFence(js,a);
JS_FreeValue(js,a);
}
int wait_all = JS_ToBool(js,argv[1]);
return JS_NewBool(js,SDL_WaitForGPUFences(gpu,wait_all,fences,n));
)
JSC_CCALL(gpu_query_fence,
SDL_GPUDevice *gpu = js2SDL_GPUDevice(js,self);
SDL_GPUFence *fence = js2SDL_GPUFence(js,argv[0]);
return JS_NewBool(js,SDL_QueryGPUFence(gpu,fence));
)
static const JSCFunctionListEntry js_SDL_GPUDevice_funcs[] = {
MIST_FUNC_DEF(gpu, claim_window, 1),
MIST_FUNC_DEF(gpu, make_pipeline, 1), // loads pipeline state into an object
@@ -4422,8 +4399,6 @@ static const JSCFunctionListEntry js_SDL_GPUDevice_funcs[] = {
MIST_FUNC_DEF(gpu, set_pipeline, 1), // grabs the gpu property off a pipeline value to load
MIST_FUNC_DEF(gpu, load_texture, 2),
MIST_FUNC_DEF(gpu, logical_size, 1),
MIST_FUNC_DEF(gpu, newframe, 1),
MIST_FUNC_DEF(gpu, camera, 2),
MIST_FUNC_DEF(gpu, scale, 1),
MIST_FUNC_DEF(gpu, texture, 4),
// MIST_FUNC_DEF(gpu, geometry, 2),
@@ -4436,6 +4411,8 @@ static const JSCFunctionListEntry js_SDL_GPUDevice_funcs[] = {
MIST_FUNC_DEF(gpu, acquire_cmd_buffer, 0),
MIST_FUNC_DEF(gpu, upload, 3),
MIST_FUNC_DEF(gpu, mainRT, 1),
MIST_FUNC_DEF(gpu, wait_for_fences, 2),
MIST_FUNC_DEF(gpu, query_fence, 1)
};
JSC_CCALL(renderpass_bind_pipeline,
@@ -4447,11 +4424,16 @@ JSC_CCALL(renderpass_bind_pipeline,
JS_SetPropertyStr(js,self, "pipeline", JS_DupValue(js,argv[0]));
)
JSC_CCALL(renderpass_draw,
JSC_CCALL(renderpass_draw_indexed,
SDL_GPURenderPass *pass = js2SDL_GPURenderPass(js,self);
SDL_DrawGPUIndexedPrimitives(pass, js2number(js,argv[0]), js2number(js,argv[1]), js2number(js,argv[2]), js2number(js,argv[3]), js2number(js,argv[4]));
)
JSC_CCALL(renderpass_draw,
SDL_GPURenderPass *pass = js2SDL_GPURenderPass(js,self);
SDL_DrawGPUPrimitives(pass, js2number(js,argv[0]), js2number(js,argv[1]), js2number(js,argv[2]), js2number(js,argv[3]));
)
JSC_CCALL(renderpass_bind_buffers,
SDL_GPURenderPass *pass = js2SDL_GPURenderPass(js,self);
int first = js2number(js,argv[0]);
@@ -4517,7 +4499,8 @@ JSC_CCALL(renderpass_bind_storage_textures,
static const JSCFunctionListEntry js_SDL_GPURenderPass_funcs[] = {
MIST_FUNC_DEF(renderpass, bind_pipeline, 1),
MIST_FUNC_DEF(renderpass, draw, 5),
MIST_FUNC_DEF(renderpass, draw, 4),
MIST_FUNC_DEF(renderpass, draw_indexed, 5),
MIST_FUNC_DEF(renderpass, end, 0),
MIST_FUNC_DEF(renderpass, bind_index_buffer, 1),
MIST_FUNC_DEF(renderpass, bind_buffers, 3),
@@ -4583,6 +4566,7 @@ JSC_CCALL(cmd_render_pass,
if (!JS_IsObject(argv[0])) return JS_ThrowTypeError(js, "render_pass: Expected a render pass descriptor object");
JSValue passObj = argv[0];
colorf clear_color = js2color(js,argv[1]);
// Get colorTargets array
JSValue colorTargetsVal = JS_GetPropertyStr(js, passObj, "color_targets");
@@ -4602,7 +4586,7 @@ JSC_CCALL(cmd_render_pass,
JS_GETPROPSTR(js, ctargetVal, colortars[i], load_op, load_op)
JS_GETPROPSTR(js, ctargetVal, colortars[i], store_op, store_op)
// JS_GETPROPSTR(js, ctargetVal, colortars[i], clear_color, color)
colortars[i].clear_color = (SDL_FColor){0,0,0,1};
colortars[i].clear_color = (SDL_FColor){clear_color.r,clear_color.g,clear_color.b,clear_color.a};
JS_GETPROPSTR(js, ctargetVal, colortars[i], texture, SDL_GPUTexture)
// If you support resolve textures or other fields, retrieve them here.
JS_FreeValue(js, ctargetVal);
@@ -4686,9 +4670,19 @@ JSC_CCALL(cmd_push_fragment_uniform_data,
SDL_PushGPUFragmentUniformData(cmds, slot, data, buf_size);
)
JSC_CCALL(cmd_push_compute_uniform_data,
SDL_GPUCommandBuffer *cmds = js2SDL_GPUCommandBuffer(js, self);
int slot;
JS_ToInt32(js, &slot, argv[0]);
size_t buf_size;
void *data = JS_GetArrayBuffer(js, &buf_size, argv[1]);
SDL_PushGPUComputeUniformData(cmds, slot, data, buf_size);
)
JSC_CCALL(cmd_submit,
SDL_GPUCommandBuffer *cmds = js2SDL_GPUCommandBuffer(js,self);
SDL_SubmitGPUCommandBuffer(cmds);
SDL_GPUFence *fence = SDL_SubmitGPUCommandBufferAndAcquireFence(cmds);
return SDL_GPUFence2js(js,fence);
)
JSC_CCALL(cmd_camera,
@@ -4697,11 +4691,9 @@ JSC_CCALL(cmd_camera,
SDL_GPURenderPass *pass = js2SDL_GPURenderPass(js, argv[1]);
HMM_Vec2 size;
// Get window size
HMM_Vec2 drawsize;
JS_PULLPROPSTR(js,argv[1],size,vec2);
drawsize = size;
// Pull out camera transform and size
transform *transform;
JS_PULLPROPSTR(js, camera, size, vec2)
@@ -4814,20 +4806,39 @@ JSC_CCALL(cmd_camera,
sdlvp.max_depth = 1.0f;
// Set the final viewport and push uniform data
SDL_SetGPUViewport(pass, &sdlvp);
SDL_PushGPUVertexUniformData(cmds, 0, &data, sizeof(data));
// SDL_SetGPUViewport(pass, &sdlvp);
SDL_PushGPUVertexUniformData(cmds, js2number(js,argv[3]), &data, sizeof(data));
)
JSC_SCALL(cmd_push_debug_group,
SDL_GPUCommandBuffer *cmd = js2SDL_GPUCommandBuffer(js,self);
SDL_PushGPUDebugGroup(cmd,str);
)
JSC_CCALL(cmd_pop_debug_group,
SDL_GPUCommandBuffer *cmd = js2SDL_GPUCommandBuffer(js,self);
SDL_PopGPUDebugGroup(cmd);
)
JSC_SCALL(cmd_debug_label,
SDL_GPUCommandBuffer *cmd = js2SDL_GPUCommandBuffer(js,self);
SDL_InsertGPUDebugLabel(cmd, str);
)
static const JSCFunctionListEntry js_SDL_GPUCommandBuffer_funcs[] = {
MIST_FUNC_DEF(cmd, render_pass, 1),
MIST_FUNC_DEF(cmd, render_pass, 2),
MIST_FUNC_DEF(cmd, swapchain_pass, 1),
MIST_FUNC_DEF(cmd, bind_vertex_buffer, 2),
MIST_FUNC_DEF(cmd, bind_index_buffer, 1),
MIST_FUNC_DEF(cmd, bind_fragment_sampler, 3),
MIST_FUNC_DEF(cmd, push_vertex_uniform_data, 2),
MIST_FUNC_DEF(cmd, push_fragment_uniform_data, 2),
MIST_FUNC_DEF(cmd, push_compute_uniform_data, 2),
MIST_FUNC_DEF(cmd, submit, 0),
MIST_FUNC_DEF(cmd, camera, 3)
MIST_FUNC_DEF(cmd, camera, 4),
MIST_FUNC_DEF(cmd, push_debug_group, 1),
MIST_FUNC_DEF(cmd, pop_debug_group, 0),
MIST_FUNC_DEF(cmd, debug_label, 1),
};
JSC_CCALL(surface_blit,
@@ -5356,6 +5367,14 @@ JSC_CCALL(transform_rect,
t->rotation = QUAT1;
)
JSC_CCALL(transform_array,
transform *t = js2transform(js,self);
HMM_Mat4 m= transform2mat(t);
ret = JS_NewArray(js);
for (int i = 0; i < 16; i++)
JS_SetPropertyUint32(js,ret,i, number2js(js,m.em[i]));
)
static const JSCFunctionListEntry js_transform_funcs[] = {
CGETSET_ADD(transform, pos),
CGETSET_ADD(transform, scale),
@@ -5369,6 +5388,7 @@ static const JSCFunctionListEntry js_transform_funcs[] = {
MIST_FUNC_DEF(transform, direction, 1),
MIST_FUNC_DEF(transform, unit, 0),
MIST_FUNC_DEF(transform, rect, 1),
MIST_FUNC_DEF(transform, array, 0),
};
JSC_CCALL(datastream_time, return number2js(js,plm_get_time(js2datastream(js,self)->plm)); )
@@ -6130,7 +6150,8 @@ JSC_CCALL(os_make_line_prim,
JS_SetPropertyStr(js, prim, "uv", make_gpu_buffer(js, uv, sizeof(uv), JS_TYPED_ARRAY_FLOAT32,2,1,0));
JS_SetPropertyStr(js,prim,"vertices", number2js(js,m->num_vertices));
JS_SetPropertyStr(js,prim,"count", number2js(js,m->num_triangles*3));
JS_SetPropertyStr(js,prim,"num_indices", number2js(js,m->num_triangles*3));
JS_SetPropertyStr(js,prim,"first_index", number2js(js,0));
parsl_destroy_context(par_ctx);
@@ -6320,6 +6341,35 @@ JSC_CCALL(os_sleep,
SDL_DelayNS(time);
)
JSC_CCALL(os_battery_pct,
int pct;
SDL_PowerState state = SDL_GetPowerInfo(NULL, &pct);
return number2js(js,pct);
)
JSC_CCALL(os_battery_voltage,
)
JSC_CCALL(os_battery_seconds,
int seconds;
SDL_PowerState state = SDL_GetPowerInfo(&seconds, NULL);
return number2js(js,seconds);
)
JSC_CCALL(os_power_state,
SDL_PowerState state = SDL_GetPowerInfo(NULL, NULL);
switch(state) {
case SDL_POWERSTATE_ERROR: return JS_ThrowTypeError(js, "Error determining power status");
case SDL_POWERSTATE_UNKNOWN: return JS_UNDEFINED;
case SDL_POWERSTATE_ON_BATTERY: return JS_NewString(js, "on battery");
case SDL_POWERSTATE_NO_BATTERY: return JS_NewString(js, "no battery");
case SDL_POWERSTATE_CHARGING: return JS_NewString(js, "charging");
case SDL_POWERSTATE_CHARGED: return JS_NewString(js, "charged");
}
return JS_UNDEFINED;
)
static const JSCFunctionListEntry js_os_funcs[] = {
MIST_FUNC_DEF(os, turbulence, 4),
MIST_FUNC_DEF(os, model_buffer, 1),
@@ -6377,6 +6427,10 @@ static const JSCFunctionListEntry js_os_funcs[] = {
MIST_FUNC_DEF(os, kill, 1),
// MIST_FUNC_DEF(os, match_img, 2),
MIST_FUNC_DEF(os, sleep, 1),
MIST_FUNC_DEF(os, battery_pct, 0),
MIST_FUNC_DEF(os, battery_voltage, 0),
MIST_FUNC_DEF(os, battery_seconds, 0),
MIST_FUNC_DEF(os, power_state, 0),
};
#define JSSTATIC(NAME, PARENT) \