Compare commits

...

19 Commits

Author SHA1 Message Date
667683d3b3 Merge branch 'feature/init-name-resolution' 2026-02-25 16:33:14 +01:00
5b1b5db06c fix: derive adjacent meta stem from snapshot path, not instance name
The previous fix used split('.').next() to get the meta stem from the
snapshot path, which only takes the first dot-segment. This broke names
containing dots (e.g. "Name.new" → "Name.new.luau" would produce
"Name.meta.json" instead of "Name.new.meta.json").

Strip the full middleware extension (e.g. ".server.luau", ".txt") from
the snapshot path filename instead. This correctly handles all cases:
  Name.new.luau      → Name.new  → Name.new.meta.json
  _Init.server.luau  → _Init     → _Init.meta.json
  Name.new.txt       → Name.new  → Name.new.meta.json

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-25 16:33:09 +01:00
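The difference between the two stemming strategies in this commit can be sketched as follows. This is an illustrative sketch only — the helper names and the extension list are hypothetical, not Rojo's actual middleware table (note that longer extensions like ".server.luau" must be tried before ".luau"):

```rust
/// Buggy approach: takes only the first dot-segment, so "Name.new.luau" → "Name".
fn meta_stem_first_segment(file_name: &str) -> &str {
    file_name.split('.').next().unwrap_or(file_name)
}

/// Fixed approach: strip the full known middleware extension instead.
fn meta_stem_strip_extension(file_name: &str) -> &str {
    // Hypothetical extension list; longest/most specific suffixes first.
    const EXTENSIONS: &[&str] = &[".server.luau", ".client.luau", ".luau", ".txt", ".csv"];
    for ext in EXTENSIONS {
        if let Some(stem) = file_name.strip_suffix(*ext) {
            return stem;
        }
    }
    file_name
}

fn main() {
    assert_eq!(meta_stem_first_segment("Name.new.luau"), "Name"); // wrong stem
    assert_eq!(meta_stem_strip_extension("Name.new.luau"), "Name.new");
    assert_eq!(meta_stem_strip_extension("_Init.server.luau"), "_Init");
    assert_eq!(meta_stem_strip_extension("Name.new.txt"), "Name.new");
}
```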
d7a9ce55db Merge branch 'feature/init-name-resolution' 2026-02-24 21:54:31 +01:00
33dd0f5ed1 fix: derive adjacent meta path from snapshot path, not instance name
When a script/txt/csv child is renamed by name_for_inst (e.g. "Init" →
"_Init.luau"), the adjacent meta file must follow the same name. All
three callers were using the Roblox instance name to construct the meta
path, producing "Init.meta.json" instead of "_Init.meta.json" — which
collides with the parent directory's "init.meta.json" on
case-insensitive file systems.

Fix by deriving the meta stem from the first dot-segment of the
snapshot path file name, which already holds the resolved name.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 21:53:53 +01:00
996113b177 Merge branch 'feature/init-name-resolution' 2026-02-24 01:06:22 +01:00
95fe993de3 feat: auto-resolve init-name conflicts during syncback
When a child instance has a Roblox name that would produce a filesystem
name of "init" (case-insensitive), syncback now automatically prefixes
it with '_' (e.g. "Init" → "_Init.luau") instead of erroring. The
corresponding meta.json writes the original name via the `name` property
so Rojo can restore it on the next snapshot.

The sibling dedup check is updated to use actual on-disk names for
existing children and the resolved (init-prefixed) name for new ones,
so genuine collisions still error while false positives from the `name`
property are avoided.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-24 01:05:31 +01:00
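The renaming rule described above can be sketched minimally (hypothetical function name; the real syncback code also writes the original name into the adjacent meta.json via the `name` property):

```rust
/// If a child's name would collide with the special "init" file name
/// (case-insensitively), prefix it with '_' instead of erroring.
fn resolve_file_stem(instance_name: &str) -> String {
    if instance_name.eq_ignore_ascii_case("init") {
        format!("_{instance_name}")
    } else {
        instance_name.to_string()
    }
}

fn main() {
    assert_eq!(resolve_file_stem("Init"), "_Init");
    assert_eq!(resolve_file_stem("INIT"), "_INIT");
    assert_eq!(resolve_file_stem("Module"), "Module");
}
```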
4ca26efccb Merge branch 'fix/git-since-live-sync' 2026-02-13 18:13:42 +01:00
ce0db54e0a Merge branch 'feature/dangerously-force-json' 2026-02-13 18:13:37 +01:00
b8106354b0 Fix --git-since not detecting first file change in filtered directories
The VFS only sets up file watches via read() and read_dir(), not
metadata(). When git filtering caused snapshot_from_vfs to return
early for $path directories, read_dir was never called, so no file
watch was established. This meant file modifications never generated
VFS events and were silently ignored until the server was restarted.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 18:04:27 +01:00
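The watch-setup behavior this commit relies on can be illustrated with a toy VFS. All types here are hypothetical stand-ins, not memofs's real API — the point is only that listing a directory is what establishes the watch, while a bare metadata query does not:

```rust
use std::collections::HashSet;
use std::path::{Path, PathBuf};

#[derive(Default)]
struct ToyVfs {
    watched: HashSet<PathBuf>,
}

impl ToyVfs {
    /// Listing a directory registers a file watch as a side effect.
    fn read_dir(&mut self, path: &Path) -> Vec<PathBuf> {
        self.watched.insert(path.to_path_buf()); // watch set up here
        Vec::new() // contents elided for the sketch
    }

    /// Querying metadata does NOT set up a watch, which is why returning
    /// early before read_dir() left $path directories unwatched.
    fn metadata(&self, _path: &Path) -> bool {
        true
    }
}

fn main() {
    let mut vfs = ToyVfs::default();
    let dir = Path::new("src/shared");
    vfs.metadata(dir);
    assert!(!vfs.watched.contains(dir)); // no watch yet
    vfs.read_dir(dir);
    assert!(vfs.watched.contains(dir)); // watch established
}
```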
c552fdc52e Add --dangerously-force-json flag for syncback
Adds a CLI flag that forces syncback to use JSON representations
instead of binary .rbxm files. Instances with children become
directories with init.meta.json; leaf instances become .model.json.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 17:41:42 +01:00
0dc37ac848 Fix --git-since live sync not detecting changes and creating duplicates
Two issues prevented --git-since from working correctly during live sync:

1. Server: File changes weren't detected because git-filtered project nodes
   had empty relevant_paths, so the change processor couldn't map VFS events
   back to tree instances. Fixed by registering $path directories and the
   project folder in relevant_paths even when filtered.

2. Plugin: When a previously-filtered file was first acknowledged, it appeared
   as an ADD patch. The plugin created a new instance instead of adopting the
   existing one in Studio, causing duplicates. Fixed by checking for untracked
   children with matching Name+ClassName before calling Instance.new.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 16:19:01 +01:00
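The adoption check from point 2 can be sketched in Rust (the real code is Luau plugin code; the types and the `tracked` flag here are illustrative stand-ins for membership in the plugin's instance map):

```rust
struct Child {
    name: String,
    class_name: String,
    tracked: bool, // stands in for "already present in the instance map"
}

/// Before creating a new instance for an ADD patch, look for an existing
/// untracked sibling with the same Name and ClassName and adopt it instead.
fn find_adoptable<'a>(
    children: &'a mut [Child],
    name: &str,
    class_name: &str,
) -> Option<&'a mut Child> {
    children
        .iter_mut()
        .find(|c| !c.tracked && c.name == name && c.class_name == class_name)
}

fn main() {
    let mut children = vec![Child {
        name: "Module".into(),
        class_name: "ModuleScript".into(),
        tracked: false,
    }];
    // Adopt the existing instance rather than constructing a duplicate:
    assert!(find_adoptable(&mut children, "Module", "ModuleScript").is_some());
    // A tracked child is not adoptable; a fresh instance would be created.
    children[0].tracked = true;
    assert!(find_adoptable(&mut children, "Module", "ModuleScript").is_none());
}
```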
891b74b135 Merge branch 'git-track' into master 2026-02-13 14:11:17 +01:00
ari
18fdbce8b0 name-prop (#1)
Reviewed-on: #1
Co-authored-by: ari <git@astrid.email>
Co-committed-by: ari <git@astrid.email>
2026-02-13 13:09:30 +00:00
Ivan Matthew
a2adf2b517 Improves sourcemap path handling with pathdiff (#1217) 2026-02-12 19:17:28 -08:00
Micah
4deda0e155 Use msgpack for API (#1176) 2026-02-12 18:37:24 -08:00
ari
4df2d3c5f8 Add actor, bindables and remotes to json_model_classes (#1199) 2026-02-12 17:34:32 -08:00
boatbomber
4965165ad5 Add option to forget prior info for place in reminder notif (#1215) 2026-01-23 21:15:34 +00:00
boatbomber
68eab3479a Fix notification unmount thread cancel bug (#1211) 2026-01-19 16:35:19 -08:00
8053909bd0 Add --git-since option to rojo serve
- Add new GitFilter struct for tracking files changed since a Git reference
- Only sync changed (added/deleted/modified) files to Roblox Studio
- Files remain acknowledged once synced, even if content is reverted
- Add enhanced logging for debugging sync issues
- Force acknowledge project structure to prevent 'Cannot sync a model as a place' errors
2026-01-19 22:02:59 +01:00
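The "files remain acknowledged once synced" behavior can be sketched conceptually (a hypothetical struct, not Rojo's real GitFilter API):

```rust
use std::collections::HashSet;
use std::path::{Path, PathBuf};

#[derive(Default)]
struct AckSet {
    acknowledged: HashSet<PathBuf>,
}

impl AckSet {
    /// Record a path reported as changed since the base Git reference.
    fn acknowledge(&mut self, path: &Path) {
        self.acknowledged.insert(path.to_path_buf());
    }

    /// Once acknowledged, a file stays synced for the rest of the session,
    /// even if its content is later reverted to match the base reference.
    fn should_sync(&self, path: &Path) -> bool {
        self.acknowledged.contains(path)
    }
}

fn main() {
    let mut filter = AckSet::default();
    let changed = Path::new("src/Module.luau");
    let untouched = Path::new("src/Other.luau");
    filter.acknowledge(changed);
    assert!(filter.should_sync(changed));
    assert!(!filter.should_sync(untouched));
}
```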
47 changed files with 1499 additions and 200 deletions

.gitmodules vendored

@@ -16,3 +16,6 @@
 [submodule "plugin/Packages/Highlighter"]
 	path = plugin/Packages/Highlighter
 	url = https://github.com/boatbomber/highlighter.git
+[submodule "plugin/Packages/msgpack-luau"]
+	path = plugin/Packages/msgpack-luau
+	url = https://github.com/cipharius/msgpack-luau/


@@ -30,15 +30,26 @@ Making a new release? Simply add the new header with the version and date undern
 -->
 ## Unreleased
+* `inf` and `nan` values in properties are now synced ([#1176])
 * Fixed a bug caused by having reference properties (such as `ObjectValue.Value`) that point to an Instance not included in syncback. ([#1179])
 * Implemented support for the "name" property in meta/model JSON files. ([#1187])
 * Fixed instance replacement fallback failing when too many instances needed to be replaced. ([#1192])
+* Added actors and bindable/remote event/function variants to be synced back as JSON files. ([#1199])
 * Fixed a bug where MacOS paths weren't being handled correctly. ([#1201])
+* Fixed a bug where the notification timeout thread would fail to cancel on unmount ([#1211])
+* Added a "Forget" option to the sync reminder notification to avoid being reminded for that place in the future ([#1215])
+* Improves relative path calculation for sourcemap generation to avoid issues with Windows UNC paths. ([#1217])
+[#1176]: https://github.com/rojo-rbx/rojo/pull/1176
 [#1179]: https://github.com/rojo-rbx/rojo/pull/1179
 [#1187]: https://github.com/rojo-rbx/rojo/pull/1187
 [#1192]: https://github.com/rojo-rbx/rojo/pull/1192
+[#1199]: https://github.com/rojo-rbx/rojo/pull/1199
 [#1201]: https://github.com/rojo-rbx/rojo/pull/1201
+[#1211]: https://github.com/rojo-rbx/rojo/pull/1211
+[#1215]: https://github.com/rojo-rbx/rojo/pull/1215
+[#1217]: https://github.com/rojo-rbx/rojo/pull/1217
 ## [7.7.0-rc.1] (November 27th, 2025)


@@ -14,6 +14,7 @@ Code contributions are welcome for features and bugs that have been reported in
 You'll want these tools to work on Rojo:
 * Latest stable Rust compiler
+  * Rustfmt and Clippy are used for code formatting and linting.
 * Latest stable [Rojo](https://github.com/rojo-rbx/rojo)
 * [Rokit](https://github.com/rojo-rbx/rokit)
 * [Luau Language Server](https://github.com/JohnnyMorganz/luau-lsp) (Only needed if working on the Studio plugin.)

Cargo.lock generated

@@ -1520,6 +1520,12 @@ version = "1.0.15"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "57c0d7b74b563b49d38dae00a0c37d4d6de9b432382b2892f0574ddcae73fd0a"
+[[package]]
+name = "pathdiff"
+version = "0.2.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "df94ce210e5bc13cb6651479fa48d14f601d9858cfe0467f43ae157023b938d3"
 [[package]]
 name = "percent-encoding"
 version = "2.3.2"
@@ -2068,6 +2074,7 @@ dependencies = [
  "num_cpus",
  "opener",
  "paste",
+ "pathdiff",
  "pretty_assertions",
  "profiling",
  "rayon",
@@ -2078,10 +2085,12 @@ dependencies = [
  "rbx_xml",
  "reqwest",
  "ritz",
+ "rmp-serde",
  "roblox_install",
  "rojo-insta-ext",
  "semver",
  "serde",
+ "serde_bytes",
  "serde_json",
  "serde_yaml",
  "strum",
@@ -2222,6 +2231,16 @@ dependencies = [
  "serde_derive",
 ]
+[[package]]
+name = "serde_bytes"
+version = "0.11.19"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a5d440709e79d88e51ac01c4b72fc6cb7314017bb7da9eeff678aa94c10e3ea8"
+dependencies = [
+ "serde",
+ "serde_core",
+]
 [[package]]
 name = "serde_cbor"
 version = "0.11.2"


@@ -100,10 +100,13 @@ clap = { version = "3.2.25", features = ["derive"] }
 profiling = "1.0.15"
 yaml-rust2 = "0.10.3"
 data-encoding = "2.8.0"
+pathdiff = "0.2.3"
 blake3 = "1.5.0"
 float-cmp = "0.9.0"
 indexmap = { version = "2.10.0", features = ["serde"] }
+rmp-serde = "1.3.0"
+serde_bytes = "0.11.19"
 [target.'cfg(windows)'.dependencies]
 winreg = "0.10.1"
@@ -122,7 +125,7 @@ semver = "1.0.22"
 rojo-insta-ext = { path = "crates/rojo-insta-ext" }
 criterion = "0.3.6"
-insta = { version = "1.36.1", features = ["redactions", "yaml"] }
+insta = { version = "1.36.1", features = ["redactions", "yaml", "json"] }
 paste = "1.0.14"
 pretty_assertions = "1.4.0"
 serde_yaml = "0.8.26"


@@ -30,6 +30,11 @@ fn snapshot_from_fs_path(path: &Path) -> io::Result<VfsSnapshot> {
             continue;
         }
+        // Ignore images in msgpack-luau because they aren't UTF-8 encoded.
+        if file_name.ends_with(".png") {
+            continue;
+        }
         let child_snapshot = snapshot_from_fs_path(&entry.path())?;
         children.push((file_name, child_snapshot));
     }


@@ -1,5 +1,7 @@
 local HttpService = game:GetService("HttpService")
+local msgpack = require(script.Parent.Parent.msgpack)
 local stringTemplate = [[
 Http.Response {
 	code: %d
@@ -31,4 +33,8 @@ function Response:json()
 	return HttpService:JSONDecode(self.body)
 end
+function Response:msgpack()
+	return msgpack.decode(self.body)
+end
 return Response


@@ -1,7 +1,8 @@
 local HttpService = game:GetService("HttpService")
-local Promise = require(script.Parent.Promise)
 local Log = require(script.Parent.Log)
+local msgpack = require(script.Parent.msgpack)
+local Promise = require(script.Parent.Promise)
 local HttpError = require(script.Error)
 local HttpResponse = require(script.Response)
@@ -68,4 +69,12 @@ function Http.jsonDecode(source)
 	return HttpService:JSONDecode(source)
 end
+function Http.msgpackEncode(object)
+	return msgpack.encode(object)
+end
+function Http.msgpackDecode(source)
+	return msgpack.decode(source)
+end
 return Http


@@ -145,7 +145,7 @@ function ApiContext:connect()
 	return Http.get(url)
 		:andThen(rejectFailedRequests)
-		:andThen(Http.Response.json)
+		:andThen(Http.Response.msgpack)
 		:andThen(rejectWrongProtocolVersion)
 		:andThen(function(body)
 			assert(validateApiInfo(body))
@@ -163,7 +163,7 @@ end
 function ApiContext:read(ids)
 	local url = ("%s/api/read/%s"):format(self.__baseUrl, table.concat(ids, ","))
-	return Http.get(url):andThen(rejectFailedRequests):andThen(Http.Response.json):andThen(function(body)
+	return Http.get(url):andThen(rejectFailedRequests):andThen(Http.Response.msgpack):andThen(function(body)
 		if body.sessionId ~= self.__sessionId then
 			return Promise.reject("Server changed ID")
 		end
@@ -191,9 +191,9 @@ function ApiContext:write(patch)
 		table.insert(updated, fixedUpdate)
 	end
-	-- Only add the 'added' field if the table is non-empty, or else Roblox's
-	-- JSON implementation will turn the table into an array instead of an
-	-- object, causing API validation to fail.
+	-- Only add the 'added' field if the table is non-empty, or else the msgpack
+	-- encode implementation will turn the table into an array instead of a map,
+	-- causing API validation to fail.
 	local added
 	if next(patch.added) ~= nil then
 		added = patch.added
@@ -206,13 +206,16 @@ function ApiContext:write(patch)
 		added = added,
 	}
-	body = Http.jsonEncode(body)
-	return Http.post(url, body):andThen(rejectFailedRequests):andThen(Http.Response.json):andThen(function(responseBody)
-		Log.info("Write response: {:?}", responseBody)
+	body = Http.msgpackEncode(body)
+	return Http.post(url, body)
+		:andThen(rejectFailedRequests)
+		:andThen(Http.Response.msgpack)
+		:andThen(function(responseBody)
+			Log.info("Write response: {:?}", responseBody)
 			return responseBody
 		end)
 end
 function ApiContext:connectWebSocket(packetHandlers)
@@ -234,7 +237,7 @@ function ApiContext:connectWebSocket(packetHandlers)
 	local closed, errored, received
 	received = self.__wsClient.MessageReceived:Connect(function(msg)
-		local data = Http.jsonDecode(msg)
+		local data = Http.msgpackDecode(msg)
 		if data.sessionId ~= self.__sessionId then
 			Log.warn("Received message with wrong session ID; ignoring")
 			return
@@ -280,7 +283,7 @@ end
 function ApiContext:open(id)
 	local url = ("%s/api/open/%s"):format(self.__baseUrl, id)
-	return Http.post(url, ""):andThen(rejectFailedRequests):andThen(Http.Response.json):andThen(function(body)
+	return Http.post(url, ""):andThen(rejectFailedRequests):andThen(Http.Response.msgpack):andThen(function(body)
 		if body.sessionId ~= self.__sessionId then
 			return Promise.reject("Server changed ID")
 		end
@@ -291,11 +294,11 @@ end
 function ApiContext:serialize(ids: { string })
 	local url = ("%s/api/serialize"):format(self.__baseUrl)
-	local request_body = Http.jsonEncode({ sessionId = self.__sessionId, ids = ids })
+	local request_body = Http.msgpackEncode({ sessionId = self.__sessionId, ids = ids })
 	return Http.post(url, request_body)
 		:andThen(rejectFailedRequests)
-		:andThen(Http.Response.json)
+		:andThen(Http.Response.msgpack)
 		:andThen(function(response_body)
 			if response_body.sessionId ~= self.__sessionId then
 				return Promise.reject("Server changed ID")
@@ -309,11 +312,11 @@ end
 function ApiContext:refPatch(ids: { string })
 	local url = ("%s/api/ref-patch"):format(self.__baseUrl)
-	local request_body = Http.jsonEncode({ sessionId = self.__sessionId, ids = ids })
+	local request_body = Http.msgpackEncode({ sessionId = self.__sessionId, ids = ids })
 	return Http.post(url, request_body)
 		:andThen(rejectFailedRequests)
-		:andThen(Http.Response.json)
+		:andThen(Http.Response.msgpack)
 		:andThen(function(response_body)
 			if response_body.sessionId ~= self.__sessionId then
 				return Promise.reject("Server changed ID")


@@ -19,9 +19,15 @@ local FullscreenNotification = Roact.Component:extend("FullscreenNotification")
 function FullscreenNotification:init()
 	self.transparency, self.setTransparency = Roact.createBinding(0)
 	self.lifetime = self.props.timeout
+	self.dismissed = false
 end
 function FullscreenNotification:dismiss()
+	if self.dismissed then
+		return
+	end
+	self.dismissed = true
 	if self.props.onClose then
 		self.props.onClose()
 	end
@@ -59,7 +65,7 @@ function FullscreenNotification:didMount()
 end
 function FullscreenNotification:willUnmount()
-	if self.timeout and coroutine.status(self.timeout) ~= "dead" then
+	if self.timeout and coroutine.status(self.timeout) == "suspended" then
 		task.cancel(self.timeout)
 	end
 end


@@ -25,6 +25,7 @@ function Notification:init()
 	self.binding = bindingUtil.fromMotor(self.motor)
 	self.lifetime = self.props.timeout
+	self.dismissed = false
 	self.motor:onStep(function(value)
 		if value <= 0 and self.props.onClose then
@@ -34,6 +35,11 @@ function Notification:init()
 end
 function Notification:dismiss()
+	if self.dismissed then
+		return
+	end
+	self.dismissed = true
 	self.motor:setGoal(Flipper.Spring.new(0, {
 		frequency = 5,
 		dampingRatio = 1,
@@ -75,7 +81,7 @@ function Notification:didMount()
 end
 function Notification:willUnmount()
-	if self.timeout and coroutine.status(self.timeout) ~= "dead" then
+	if self.timeout and coroutine.status(self.timeout) == "suspended" then
 		task.cancel(self.timeout)
 	end
 end


@@ -301,6 +301,19 @@ function App:setPriorSyncInfo(host: string, port: string, projectName: string)
 	Settings:set("priorEndpoints", priorSyncInfos)
 end
+function App:forgetPriorSyncInfo()
+	local priorSyncInfos = Settings:get("priorEndpoints")
+	if not priorSyncInfos then
+		priorSyncInfos = {}
+	end
+	local id = tostring(game.PlaceId)
+	priorSyncInfos[id] = nil
+	Log.trace("Erased last used endpoint for {}", game.PlaceId)
+	Settings:set("priorEndpoints", priorSyncInfos)
+end
 function App:getHostAndPort()
 	local host = self.host:getValue()
 	local port = self.port:getValue()
@@ -435,7 +448,8 @@ function App:checkSyncReminder()
 	self:findActiveServer()
 		:andThen(function(serverInfo, host, port)
 			self:sendSyncReminder(
-				`Project '{serverInfo.projectName}' is serving at {host}:{port}.\nWould you like to connect?`
+				`Project '{serverInfo.projectName}' is serving at {host}:{port}.\nWould you like to connect?`,
+				{ "Connect", "Dismiss" }
 			)
 		end)
 		:catch(function()
@@ -446,7 +460,8 @@ function App:checkSyncReminder()
 			local timeSinceSync = timeUtil.elapsedToText(os.time() - priorSyncInfo.timestamp)
 			self:sendSyncReminder(
-				`You synced project '{priorSyncInfo.projectName}' to this place {timeSinceSync}.\nDid you mean to run 'rojo serve' and then connect?`
+				`You synced project '{priorSyncInfo.projectName}' to this place {timeSinceSync}.\nDid you mean to run 'rojo serve' and then connect?`,
+				{ "Connect", "Forget", "Dismiss" }
 			)
 		end
 	end)
@@ -486,12 +501,16 @@ function App:stopSyncReminderPolling()
 	end
 end
-function App:sendSyncReminder(message: string)
+function App:sendSyncReminder(message: string, shownActions: { string })
 	local syncReminderMode = Settings:get("syncReminderMode")
 	if syncReminderMode == "None" then
 		return
 	end
+	local connectIndex = table.find(shownActions, "Connect")
+	local forgetIndex = table.find(shownActions, "Forget")
+	local dismissIndex = table.find(shownActions, "Dismiss")
 	self.dismissSyncReminder = self:addNotification({
 		text = message,
 		timeout = 120,
@@ -500,24 +519,39 @@ function App:sendSyncReminder(message: string)
 			self.dismissSyncReminder = nil
 		end,
 		actions = {
-			Connect = {
-				text = "Connect",
-				style = "Solid",
-				layoutOrder = 1,
-				onClick = function()
-					self:startSession()
-				end,
-			},
-			Dismiss = {
-				text = "Dismiss",
-				style = "Bordered",
-				layoutOrder = 2,
-				onClick = function()
-					-- If the user dismisses the reminder,
-					-- then we don't need to remind them again
-					self:stopSyncReminderPolling()
-				end,
-			},
+			Connect = if connectIndex
+				then {
+					text = "Connect",
+					style = "Solid",
+					layoutOrder = connectIndex,
+					onClick = function()
+						self:startSession()
+					end,
+				}
+				else nil,
+			Forget = if forgetIndex
+				then {
+					text = "Forget",
+					style = "Bordered",
+					layoutOrder = forgetIndex,
+					onClick = function()
+						-- The user doesn't want to be reminded again about this sync
+						self:forgetPriorSyncInfo()
+					end,
+				}
+				else nil,
+			Dismiss = if dismissIndex
+				then {
+					text = "Dismiss",
+					style = "Bordered",
+					layoutOrder = dismissIndex,
+					onClick = function()
+						-- If the user dismisses the reminder,
+						-- then we don't need to remind them again
+						self:stopSyncReminderPolling()
+					end,
+				}
+				else nil,
 		},
 	})
 end


@@ -54,6 +54,10 @@ local function trueEquals(a, b): boolean
 		end
 		return true
+	-- For NaN, check if both values are not equal to themselves
+	elseif a ~= a and b ~= b then
+		return true
 	-- For numbers, compare with epsilon of 0.0001 to avoid floating point inequality
 	elseif typeA == "number" and typeB == "number" then
 		return fuzzyEq(a, b, 0.0001)


@@ -41,14 +41,41 @@ function reifyInstanceInner(unappliedPatch, deferredRefs, instanceMap, virtualIn
 		invariant("Cannot reify an instance not present in virtualInstances\nID: {}", id)
 	end
-	-- Instance.new can fail if we're passing in something that can't be
-	-- created, like a service, something enabled with a feature flag, or
-	-- something that requires higher security than we have.
-	local createSuccess, instance = pcall(Instance.new, virtualInstance.ClassName)
-	if not createSuccess then
-		addAllToPatch(unappliedPatch, virtualInstances, id)
-		return
-	end
+	-- Before creating a new instance, check if the parent already has an
+	-- untracked child with the same Name and ClassName. This enables "late
+	-- adoption" of instances that exist in Studio but weren't in the initial
+	-- Rojo tree (e.g., when using --git-since filtering). Without this,
+	-- newly acknowledged files would create duplicate instances.
+	local adoptedExisting = false
+	local instance = nil
+	for _, child in ipairs(parentInstance:GetChildren()) do
+		local accessSuccess, name, className = pcall(function()
+			return child.Name, child.ClassName
+		end)
+		if accessSuccess
+			and name == virtualInstance.Name
+			and className == virtualInstance.ClassName
+			and instanceMap.fromInstances[child] == nil
+		then
+			instance = child
+			adoptedExisting = true
+			break
+		end
+	end
+	if not adoptedExisting then
+		-- Instance.new can fail if we're passing in something that can't be
+		-- created, like a service, something enabled with a feature flag, or
+		-- something that requires higher security than we have.
+		local createSuccess
+		createSuccess, instance = pcall(Instance.new, virtualInstance.ClassName)
+		if not createSuccess then
+			addAllToPatch(unappliedPatch, virtualInstances, id)
+			return
+		end
+	end
 	-- TODO: Can this fail? Previous versions of Rojo guarded against this, but
@@ -96,7 +123,9 @@ function reifyInstanceInner(unappliedPatch, deferredRefs, instanceMap, virtualIn
 		reifyInstanceInner(unappliedPatch, deferredRefs, instanceMap, virtualInstances, childId, instance)
 	end
-	instance.Parent = parentInstance
+	if not adoptedExisting then
+		instance.Parent = parentInstance
+	end
 	instanceMap:insert(id, instance)
 end


@@ -9,6 +9,7 @@ use std::{
 };
 use crate::{
+    git::SharedGitFilter,
     message_queue::MessageQueue,
     snapshot::{
         apply_patch_set, compute_patch_set, AppliedPatchSet, InstigatingSource, PatchSet, RojoTree,
@@ -46,11 +47,15 @@ pub struct ChangeProcessor {
 impl ChangeProcessor {
     /// Spin up the ChangeProcessor, connecting it to the given tree, VFS, and
     /// outbound message queue.
+    ///
+    /// If `git_filter` is provided, it will be refreshed on every VFS event
+    /// to ensure newly changed files are acknowledged.
     pub fn start(
         tree: Arc<Mutex<RojoTree>>,
         vfs: Arc<Vfs>,
         message_queue: Arc<MessageQueue<AppliedPatchSet>>,
         tree_mutation_receiver: Receiver<PatchSet>,
+        git_filter: Option<SharedGitFilter>,
     ) -> Self {
         let (shutdown_sender, shutdown_receiver) = crossbeam_channel::bounded(1);
         let vfs_receiver = vfs.event_receiver();
@@ -58,6 +63,7 @@ impl ChangeProcessor {
             tree,
             vfs,
             message_queue,
+            git_filter,
         };
         let job_thread = jod_thread::Builder::new()
@@ -111,6 +117,10 @@ struct JobThreadContext {
     /// Whenever changes are applied to the DOM, we should push those changes
     /// into this message queue to inform any connected clients.
     message_queue: Arc<MessageQueue<AppliedPatchSet>>,
+    /// Optional Git filter for --git-since mode. When set, will be refreshed
+    /// on every VFS event to ensure newly changed files are acknowledged.
+    git_filter: Option<SharedGitFilter>,
 }
 impl JobThreadContext {
@@ -160,6 +170,14 @@ impl JobThreadContext {
     fn handle_vfs_event(&self, event: VfsEvent) {
         log::trace!("Vfs event: {:?}", event);
+        // If we have a git filter, refresh it to pick up any new changes.
+        // This ensures that files modified during the session will be acknowledged.
+        if let Some(ref git_filter) = self.git_filter {
+            if let Err(err) = git_filter.refresh() {
+                log::warn!("Failed to refresh git filter: {:?}", err);
+            }
+        }
         // Update the VFS immediately with the event.
         self.vfs
             .commit_event(&event)


@@ -81,7 +81,7 @@ impl BuildCommand {
         let vfs = Vfs::new_default();
         vfs.set_watch_enabled(self.watch);
-        let session = ServeSession::new(vfs, project_path)?;
+        let session = ServeSession::new(vfs, project_path, None)?;
         let mut cursor = session.message_queue().cursor();
         write_model(&session, &output_path, output_kind)?;


@@ -54,7 +54,7 @@ fn initialize_plugin() -> anyhow::Result<ServeSession> {
     in_memory_fs.load_snapshot("/plugin", plugin_snapshot)?;
     let vfs = Vfs::new(in_memory_fs);
-    Ok(ServeSession::new(vfs, "/plugin")?)
+    Ok(ServeSession::new(vfs, "/plugin", None)?)
 }
 fn install_plugin() -> anyhow::Result<()> {


@@ -9,7 +9,7 @@ use clap::Parser;
use memofs::Vfs; use memofs::Vfs;
use termcolor::{BufferWriter, Color, ColorChoice, ColorSpec, WriteColor}; use termcolor::{BufferWriter, Color, ColorChoice, ColorSpec, WriteColor};
use crate::{serve_session::ServeSession, web::LiveServer}; use crate::{git::GitFilter, serve_session::ServeSession, web::LiveServer};
use super::{resolve_path, GlobalOptions}; use super::{resolve_path, GlobalOptions};
@@ -31,6 +31,19 @@ pub struct ServeCommand {
/// it has none. /// it has none.
#[clap(long)] #[clap(long)]
pub port: Option<u16>, pub port: Option<u16>,
/// Only sync files that have changed since the given Git reference.
///
/// When this option is set, Rojo will only include files that have been
/// modified, added, or are untracked since the specified Git reference
/// (e.g., "HEAD", "main", a commit hash). This is useful for working with
/// large projects where you only want to sync your local changes.
///
/// Scripts that have not changed will still be acknowledged if modified
/// during the session, and all synced instances will have
/// ignoreUnknownInstances set to true to preserve descendants in Studio.
#[clap(long, value_name = "REF")]
pub git_since: Option<String>,
}
impl ServeCommand {
@@ -39,7 +52,19 @@ impl ServeCommand {
let vfs = Vfs::new_default();
-let session = Arc::new(ServeSession::new(vfs, project_path)?);
// Set up Git filter if --git-since was specified
let git_filter = if let Some(ref base_ref) = self.git_since {
let repo_root = GitFilter::find_repo_root(&project_path)?;
log::info!(
"Git filter enabled: only syncing files changed since '{}'",
base_ref
);
Some(Arc::new(GitFilter::new(repo_root, base_ref.clone(), &project_path)?))
} else {
None
};
let session = Arc::new(ServeSession::new(vfs, project_path, git_filter)?);
let ip = self
.address
@@ -53,17 +78,25 @@ impl ServeCommand {
let server = LiveServer::new(session);
-let _ = show_start_message(ip, port, global.color.into());
+let _ = show_start_message(ip, port, self.git_since.as_deref(), global.color.into());
server.start((ip, port).into());
Ok(())
}
}
-fn show_start_message(bind_address: IpAddr, port: u16, color: ColorChoice) -> io::Result<()> {
fn show_start_message(
bind_address: IpAddr,
port: u16,
git_since: Option<&str>,
color: ColorChoice,
) -> io::Result<()> {
let mut green = ColorSpec::new();
green.set_fg(Some(Color::Green)).set_bold(true);
let mut yellow = ColorSpec::new();
yellow.set_fg(Some(Color::Yellow)).set_bold(true);
let writer = BufferWriter::stdout(color);
let mut buffer = writer.buffer();
@@ -84,6 +117,13 @@ fn show_start_message(bind_address: IpAddr, port: u16, color: ColorChoice) -> io
buffer.set_color(&green)?;
writeln!(&mut buffer, "{}", port)?;
if let Some(base_ref) = git_since {
buffer.set_color(&ColorSpec::new())?;
write!(&mut buffer, " Mode: ")?;
buffer.set_color(&yellow)?;
writeln!(&mut buffer, "git-since ({})", base_ref)?;
}
writeln!(&mut buffer)?;
buffer.set_color(&ColorSpec::new())?;


@@ -0,0 +1,35 @@
---
source: src/cli/sourcemap.rs
expression: sourcemap_contents
---
{
"name": "default",
"className": "DataModel",
"filePaths": "[...1 path omitted...]",
"children": [
{
"name": "ReplicatedStorage",
"className": "ReplicatedStorage",
"children": [
{
"name": "Project",
"className": "ModuleScript",
"filePaths": "[...1 path omitted...]",
"children": [
{
"name": "Module",
"className": "Folder",
"children": [
{
"name": "module",
"className": "ModuleScript",
"filePaths": "[...1 path omitted...]"
}
]
}
]
}
]
}
]
}


@@ -0,0 +1,41 @@
---
source: src/cli/sourcemap.rs
expression: sourcemap_contents
---
{
"name": "default",
"className": "DataModel",
"filePaths": [
"default.project.json"
],
"children": [
{
"name": "ReplicatedStorage",
"className": "ReplicatedStorage",
"children": [
{
"name": "Project",
"className": "ModuleScript",
"filePaths": [
"src/init.luau"
],
"children": [
{
"name": "Module",
"className": "Folder",
"children": [
{
"name": "module",
"className": "ModuleScript",
"filePaths": [
"../module/module.luau"
]
}
]
}
]
}
]
}
]
}


@@ -10,7 +10,7 @@ use fs_err::File;
use memofs::Vfs;
use rayon::prelude::*;
use rbx_dom_weak::{types::Ref, Ustr};
-use serde::Serialize;
+use serde::{Deserialize, Serialize};
use tokio::runtime::Runtime;
use crate::{
@@ -24,19 +24,20 @@ const PATH_STRIP_FAILED_ERR: &str = "Failed to create relative paths for project
const ABSOLUTE_PATH_FAILED_ERR: &str = "Failed to turn relative path into absolute path!";
/// Representation of a node in the generated sourcemap tree.
-#[derive(Serialize)]
+#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct SourcemapNode<'a> {
name: &'a str,
class_name: Ustr,
#[serde(
default,
skip_serializing_if = "Vec::is_empty",
serialize_with = "crate::path_serializer::serialize_vec_absolute"
)]
file_paths: Vec<Cow<'a, Path>>,
-#[serde(skip_serializing_if = "Vec::is_empty")]
+#[serde(default, skip_serializing_if = "Vec::is_empty")]
children: Vec<SourcemapNode<'a>>,
}
@@ -70,13 +71,14 @@ pub struct SourcemapCommand {
impl SourcemapCommand {
pub fn run(self) -> anyhow::Result<()> {
-let project_path = resolve_path(&self.project);
+let project_path = fs_err::canonicalize(resolve_path(&self.project))?;
-log::trace!("Constructing in-memory filesystem");
+log::trace!("Constructing filesystem with StdBackend");
let vfs = Vfs::new_default();
vfs.set_watch_enabled(self.watch);
-let session = ServeSession::new(vfs, project_path)?;
+log::trace!("Setting up session for sourcemap generation");
let session = ServeSession::new(vfs, project_path, None)?;
let mut cursor = session.message_queue().cursor();
let filter = if self.include_non_scripts {
@@ -87,14 +89,17 @@ impl SourcemapCommand {
// Pre-build a rayon threadpool with a low number of threads to avoid
// dynamic creation overhead on systems with a high number of cpus.
log::trace!("Setting rayon global threadpool");
rayon::ThreadPoolBuilder::new()
.num_threads(num_cpus::get().min(6))
.build_global()
-.unwrap();
+.ok();
log::trace!("Writing initial sourcemap");
write_sourcemap(&session, self.output.as_deref(), filter, self.absolute)?;
if self.watch {
log::trace!("Setting up runtime for watch mode");
let rt = Runtime::new().unwrap();
loop {
@@ -208,7 +213,7 @@ fn recurse_create_node<'a>(
} else {
for val in file_paths {
output_file_paths.push(Cow::from(
-val.strip_prefix(project_dir).expect(PATH_STRIP_FAILED_ERR),
+pathdiff::diff_paths(val, project_dir).expect(PATH_STRIP_FAILED_ERR),
));
}
};
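The hunk above swaps `Path::strip_prefix` for `pathdiff::diff_paths` because `strip_prefix` can only express paths that live under the project directory; a file referenced from a sibling directory would hit the `expect` and panic, while a relative walk can express it as `../module/module.luau`. A stdlib-only sketch of the failure mode (paths are illustrative):

```rust
use std::path::Path;

fn main() {
    let project_dir = Path::new("/repo/project");

    // A file under the project directory: strip_prefix works.
    let inside = Path::new("/repo/project/src/init.luau");
    assert_eq!(
        inside.strip_prefix(project_dir).unwrap(),
        Path::new("src/init.luau")
    );

    // A sibling file outside the project directory: strip_prefix errors,
    // which is why the old `.expect(PATH_STRIP_FAILED_ERR)` could panic.
    let outside = Path::new("/repo/module/module.luau");
    assert!(outside.strip_prefix(project_dir).is_err());
}
```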
@@ -250,3 +255,80 @@ fn write_sourcemap(
Ok(())
}
#[cfg(test)]
mod test {
use crate::cli::sourcemap::SourcemapNode;
use crate::cli::SourcemapCommand;
use insta::internals::Content;
use std::path::Path;
#[test]
fn maps_relative_paths() {
let sourcemap_dir = tempfile::tempdir().unwrap();
let sourcemap_output = sourcemap_dir.path().join("sourcemap.json");
let project_path = fs_err::canonicalize(
Path::new(env!("CARGO_MANIFEST_DIR"))
.join("test-projects")
.join("relative_paths")
.join("project"),
)
.unwrap();
let sourcemap_command = SourcemapCommand {
project: project_path,
output: Some(sourcemap_output.clone()),
include_non_scripts: false,
watch: false,
absolute: false,
};
assert!(sourcemap_command.run().is_ok());
let raw_sourcemap_contents = fs_err::read_to_string(sourcemap_output.as_path()).unwrap();
let sourcemap_contents =
serde_json::from_str::<SourcemapNode>(&raw_sourcemap_contents).unwrap();
insta::assert_json_snapshot!(sourcemap_contents);
}
#[test]
fn maps_absolute_paths() {
let sourcemap_dir = tempfile::tempdir().unwrap();
let sourcemap_output = sourcemap_dir.path().join("sourcemap.json");
let project_path = fs_err::canonicalize(
Path::new(env!("CARGO_MANIFEST_DIR"))
.join("test-projects")
.join("relative_paths")
.join("project"),
)
.unwrap();
let sourcemap_command = SourcemapCommand {
project: project_path,
output: Some(sourcemap_output.clone()),
include_non_scripts: false,
watch: false,
absolute: true,
};
assert!(sourcemap_command.run().is_ok());
let raw_sourcemap_contents = fs_err::read_to_string(sourcemap_output.as_path()).unwrap();
let sourcemap_contents =
serde_json::from_str::<SourcemapNode>(&raw_sourcemap_contents).unwrap();
insta::assert_json_snapshot!(sourcemap_contents, {
".**.filePaths" => insta::dynamic_redaction(|mut value, _path| {
let mut paths_count = 0;
match value {
Content::Seq(ref mut vec) => {
for path in vec.iter().map(|i| i.as_str().unwrap()) {
assert_eq!(fs_err::canonicalize(path).is_ok(), true, "path was not valid");
assert_eq!(Path::new(path).is_absolute(), true, "path was not absolute");
paths_count += 1;
}
}
_ => panic!("Expected filePaths to be a sequence"),
}
format!("[...{} path{} omitted...]", paths_count, if paths_count != 1 { "s" } else { "" } )
})
});
}
}


@@ -54,6 +54,11 @@ pub struct SyncbackCommand {
/// If provided, the prompt for writing to the file system is skipped.
#[clap(long, short = 'y')]
pub non_interactive: bool,
/// If provided, forces syncback to use JSON model files instead of binary
/// .rbxm files for instances that would otherwise serialize as binary.
#[clap(long)]
pub dangerously_force_json: bool,
}
impl SyncbackCommand {
@@ -73,7 +78,7 @@ impl SyncbackCommand {
vfs.set_watch_enabled(false);
let project_start_timer = Instant::now();
-let session_old = ServeSession::new(vfs, path_old.clone())?;
+let session_old = ServeSession::new(vfs, path_old.clone(), None)?;
log::debug!(
"Finished opening project in {:0.02}s",
project_start_timer.elapsed().as_secs_f32()
@@ -104,6 +109,7 @@ impl SyncbackCommand {
&mut dom_old,
dom_new,
session_old.root_project(),
self.dangerously_force_json,
)?;
log::debug!(
"Syncback finished in {:.02}s!",


@@ -42,7 +42,7 @@ impl UploadCommand {
let vfs = Vfs::new_default();
-let session = ServeSession::new(vfs, project_path)?;
+let session = ServeSession::new(vfs, project_path, None)?;
let tree = session.tree();
let inner_tree = tree.inner();

src/git.rs (new file, 380 lines)

@@ -0,0 +1,380 @@
//! Git integration for filtering files based on changes since a reference.
use std::{
collections::HashSet,
path::{Path, PathBuf},
process::Command,
sync::{Arc, RwLock},
};
use anyhow::{bail, Context};
/// A filter that tracks which files have been changed since a Git reference.
///
/// When active, only files that have been modified, added, or deleted according
/// to Git will be "acknowledged" and synced to Studio. This allows users to
/// work with large projects where they only want to sync their local changes.
///
/// Once a file is acknowledged (either initially or during the session), it
/// stays acknowledged for the entire session. This prevents files from being
/// deleted in Studio if their content is reverted to match the git reference.
#[derive(Debug)]
pub struct GitFilter {
/// The Git repository root directory.
repo_root: PathBuf,
/// The Git reference to compare against (e.g., "HEAD", "main", a commit hash).
base_ref: String,
/// Cache of paths that are currently different from the base ref according to git.
/// This is refreshed on every VFS event.
git_changed_paths: RwLock<HashSet<PathBuf>>,
/// Paths that have been acknowledged at any point during this session.
/// Once a path is added here, it stays acknowledged forever (for this session).
/// This prevents files from being deleted if their content is reverted.
session_acknowledged_paths: RwLock<HashSet<PathBuf>>,
}
impl GitFilter {
/// Creates a new GitFilter for the given repository root and base reference.
///
/// The `repo_root` should be the root of the Git repository (where .git is located).
/// The `base_ref` is the Git reference to compare against (e.g., "HEAD", "main").
/// The `project_path` is the path to the project being served - it will always be
/// acknowledged regardless of git status to ensure the project structure exists.
pub fn new(repo_root: PathBuf, base_ref: String, project_path: &Path) -> anyhow::Result<Self> {
let filter = Self {
repo_root,
base_ref,
git_changed_paths: RwLock::new(HashSet::new()),
session_acknowledged_paths: RwLock::new(HashSet::new()),
};
// Always acknowledge the project path and its directory so the project
// structure exists even when there are no git changes
filter.acknowledge_project_path(project_path);
// Initial refresh to populate the cache with git changes
filter.refresh()?;
Ok(filter)
}
/// Acknowledges the project path and its containing directory.
/// This ensures the project structure always exists regardless of git status.
fn acknowledge_project_path(&self, project_path: &Path) {
let mut session = self.session_acknowledged_paths.write().unwrap();
// Acknowledge the project path itself (might be a directory or .project.json file)
let canonical = project_path.canonicalize().unwrap_or_else(|_| project_path.to_path_buf());
session.insert(canonical.clone());
// Acknowledge all ancestor directories
let mut current = canonical.parent();
while let Some(parent) = current {
session.insert(parent.to_path_buf());
current = parent.parent();
}
// If it's a directory, also acknowledge default.project.json inside it
if project_path.is_dir() {
for name in &["default.project.json", "default.project.jsonc"] {
let project_file = project_path.join(name);
if let Ok(canonical_file) = project_file.canonicalize() {
session.insert(canonical_file);
} else {
session.insert(project_file);
}
}
}
// If it's a .project.json file, also acknowledge its parent directory
if let Some(parent) = project_path.parent() {
let parent_canonical = parent.canonicalize().unwrap_or_else(|_| parent.to_path_buf());
session.insert(parent_canonical);
}
log::debug!(
"GitFilter: acknowledged project path {} ({} paths total)",
project_path.display(),
session.len()
);
}
/// Finds the Git repository root for the given path.
pub fn find_repo_root(path: &Path) -> anyhow::Result<PathBuf> {
let output = Command::new("git")
.args(["rev-parse", "--show-toplevel"])
.current_dir(path)
.output()
.context("Failed to execute git rev-parse")?;
if !output.status.success() {
let stderr = String::from_utf8_lossy(&output.stderr);
bail!("Failed to find Git repository root: {}", stderr.trim());
}
let root = String::from_utf8_lossy(&output.stdout)
.trim()
.to_string();
Ok(PathBuf::from(root))
}
/// Refreshes the cache of acknowledged paths by querying Git.
///
/// This should be called when files change to ensure newly modified files
/// are properly acknowledged. Once a path is acknowledged, it stays
/// acknowledged for the entire session (even if the file is reverted).
pub fn refresh(&self) -> anyhow::Result<()> {
let mut git_changed = HashSet::new();
// Get files changed since the base ref (modified, added, deleted)
let diff_output = Command::new("git")
.args(["diff", "--name-only", &self.base_ref])
.current_dir(&self.repo_root)
.output()
.context("Failed to execute git diff")?;
if !diff_output.status.success() {
let stderr = String::from_utf8_lossy(&diff_output.stderr);
bail!("git diff failed: {}", stderr.trim());
}
let diff_files = String::from_utf8_lossy(&diff_output.stdout);
let diff_count = diff_files.lines().filter(|l| !l.is_empty()).count();
if diff_count > 0 {
log::debug!("git diff found {} changed files", diff_count);
}
for line in diff_files.lines() {
if !line.is_empty() {
let path = self.repo_root.join(line);
log::trace!("git diff: acknowledging {}", path.display());
self.acknowledge_path(&path, &mut git_changed);
}
}
// Get untracked files (new files not yet committed)
let untracked_output = Command::new("git")
.args(["ls-files", "--others", "--exclude-standard"])
.current_dir(&self.repo_root)
.output()
.context("Failed to execute git ls-files")?;
if !untracked_output.status.success() {
let stderr = String::from_utf8_lossy(&untracked_output.stderr);
bail!("git ls-files failed: {}", stderr.trim());
}
let untracked_files = String::from_utf8_lossy(&untracked_output.stdout);
for line in untracked_files.lines() {
if !line.is_empty() {
let path = self.repo_root.join(line);
self.acknowledge_path(&path, &mut git_changed);
}
}
// Get staged files (files added to index but not yet committed)
let staged_output = Command::new("git")
.args(["diff", "--name-only", "--cached", &self.base_ref])
.current_dir(&self.repo_root)
.output()
.context("Failed to execute git diff --cached")?;
if staged_output.status.success() {
let staged_files = String::from_utf8_lossy(&staged_output.stdout);
for line in staged_files.lines() {
if !line.is_empty() {
let path = self.repo_root.join(line);
self.acknowledge_path(&path, &mut git_changed);
}
}
}
// Update the git changed paths cache
{
let mut cache = self.git_changed_paths.write().unwrap();
*cache = git_changed.clone();
}
// Merge newly changed paths into session acknowledged paths
// Once acknowledged, a path stays acknowledged for the entire session
{
let mut session = self.session_acknowledged_paths.write().unwrap();
for path in git_changed {
session.insert(path);
}
log::debug!(
"GitFilter refreshed: {} paths acknowledged in session",
session.len()
);
}
Ok(())
}
/// Acknowledges a path and all its ancestors, plus associated meta files.
fn acknowledge_path(&self, path: &Path, acknowledged: &mut HashSet<PathBuf>) {
// Canonicalize the path if possible, otherwise use as-is
let path = path.canonicalize().unwrap_or_else(|_| path.to_path_buf());
// Add the path itself
acknowledged.insert(path.clone());
// Add all ancestor directories
let mut current = path.parent();
while let Some(parent) = current {
acknowledged.insert(parent.to_path_buf());
current = parent.parent();
}
// Add associated meta files
self.acknowledge_meta_files(&path, acknowledged);
}
/// Acknowledges associated meta files for a given path.
fn acknowledge_meta_files(&self, path: &Path, acknowledged: &mut HashSet<PathBuf>) {
if let Some(file_name) = path.file_name().and_then(|n| n.to_str()) {
if let Some(parent) = path.parent() {
// For a file like "foo.lua", also acknowledge "foo.meta.json"
// Strip known extensions to get the base name
let base_name = strip_lua_extension(file_name);
let meta_path = parent.join(format!("{}.meta.json", base_name));
if let Ok(canonical) = meta_path.canonicalize() {
acknowledged.insert(canonical);
} else {
acknowledged.insert(meta_path);
}
// For init files, also acknowledge "init.meta.json" in the same directory
if file_name.starts_with("init.") {
let init_meta = parent.join("init.meta.json");
if let Ok(canonical) = init_meta.canonicalize() {
acknowledged.insert(canonical);
} else {
acknowledged.insert(init_meta);
}
}
}
}
}
/// Checks if a path is acknowledged (should be synced).
///
/// Returns `true` if the path or any of its descendants have been changed
/// at any point during this session. Once a file is acknowledged, it stays
/// acknowledged even if its content is reverted to match the git reference.
pub fn is_acknowledged(&self, path: &Path) -> bool {
let session = self.session_acknowledged_paths.read().unwrap();
// Try to canonicalize the path
let canonical = path.canonicalize().unwrap_or_else(|_| path.to_path_buf());
// Check if this exact path is acknowledged
if session.contains(&canonical) {
log::trace!("Path {} is directly acknowledged", path.display());
return true;
}
// Also check without canonicalization in case of path differences
if session.contains(path) {
log::trace!("Path {} is acknowledged (non-canonical)", path.display());
return true;
}
// For directories, check if any descendant is acknowledged
// This is done by checking if any acknowledged path starts with this path
for acknowledged in session.iter() {
if acknowledged.starts_with(&canonical) {
log::trace!(
"Path {} has acknowledged descendant {}",
path.display(),
acknowledged.display()
);
return true;
}
// Also check non-canonical
if acknowledged.starts_with(path) {
log::trace!(
"Path {} has acknowledged descendant {} (non-canonical)",
path.display(),
acknowledged.display()
);
return true;
}
}
log::trace!(
"Path {} is NOT acknowledged (canonical: {})",
path.display(),
canonical.display()
);
false
}
/// Returns the base reference being compared against.
pub fn base_ref(&self) -> &str {
&self.base_ref
}
/// Returns the repository root path.
pub fn repo_root(&self) -> &Path {
&self.repo_root
}
/// Explicitly acknowledges a path and all its ancestors.
/// This is useful for ensuring certain paths are always synced regardless of git status.
pub fn force_acknowledge(&self, path: &Path) {
let mut acknowledged = HashSet::new();
self.acknowledge_path(path, &mut acknowledged);
let mut session = self.session_acknowledged_paths.write().unwrap();
for p in acknowledged {
session.insert(p);
}
}
}
/// Strips Lua-related extensions from a file name to get the base name.
fn strip_lua_extension(file_name: &str) -> &str {
const EXTENSIONS: &[&str] = &[
".server.luau",
".server.lua",
".client.luau",
".client.lua",
".luau",
".lua",
];
for ext in EXTENSIONS {
if let Some(base) = file_name.strip_suffix(ext) {
return base;
}
}
// If no Lua extension, try to strip the regular extension
file_name
.rsplit_once('.')
.map(|(base, _)| base)
.unwrap_or(file_name)
}
/// A wrapper around GitFilter that can be shared across threads.
pub type SharedGitFilter = Arc<GitFilter>;
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_strip_lua_extension() {
assert_eq!(strip_lua_extension("foo.server.lua"), "foo");
assert_eq!(strip_lua_extension("foo.client.luau"), "foo");
assert_eq!(strip_lua_extension("foo.lua"), "foo");
assert_eq!(strip_lua_extension("init.server.lua"), "init");
assert_eq!(strip_lua_extension("bar.txt"), "bar");
assert_eq!(strip_lua_extension("noextension"), "noextension");
}
}
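The ancestor walk in `acknowledge_path` is what lets `is_acknowledged` treat a directory as acknowledged when any file inside it has changed: the changed file and every parent directory are inserted into the set, so a directory lookup is a plain set membership test. A self-contained sketch of just that step (function and path names here are illustrative, not the crate's API):

```rust
use std::collections::HashSet;
use std::path::{Path, PathBuf};

// Insert a changed file plus every ancestor directory, mirroring the
// ancestor loop in `acknowledge_path` above.
fn acknowledge(path: &Path, acknowledged: &mut HashSet<PathBuf>) {
    acknowledged.insert(path.to_path_buf());
    let mut current = path.parent();
    while let Some(parent) = current {
        acknowledged.insert(parent.to_path_buf());
        current = parent.parent();
    }
}

fn main() {
    let mut set = HashSet::new();
    acknowledge(Path::new("/repo/src/shared/module.luau"), &mut set);
    // The file, its three ancestor directories, and the root are all present.
    assert!(set.contains(Path::new("/repo/src")));
    assert_eq!(set.len(), 5);
}
```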


@@ -9,6 +9,7 @@ mod tree_view;
mod auth_cookie;
mod change_processor;
mod git;
mod glob;
mod json;
mod lua_ast;
@@ -28,6 +29,7 @@ mod web;
// TODO: Work out what we should expose publicly
pub use git::{GitFilter, SharedGitFilter};
pub use project::*;
pub use rojo_ref::*;
pub use session_id::SessionId;


@@ -13,6 +13,7 @@ use thiserror::Error;
use crate::{
change_processor::ChangeProcessor,
git::SharedGitFilter,
message_queue::MessageQueue,
project::{Project, ProjectError},
session_id::SessionId,
@@ -94,7 +95,14 @@ impl ServeSession {
/// The project file is expected to be loaded out-of-band since it's
/// currently loaded from the filesystem directly instead of through the
/// in-memory filesystem layer.
-pub fn new<P: AsRef<Path>>(vfs: Vfs, start_path: P) -> Result<Self, ServeSessionError> {
///
/// If `git_filter` is provided, only files that have changed since the
/// specified Git reference will be synced.
pub fn new<P: AsRef<Path>>(
vfs: Vfs,
start_path: P,
git_filter: Option<SharedGitFilter>,
) -> Result<Self, ServeSessionError> {
let start_path = start_path.as_ref();
let start_time = Instant::now();
@@ -102,12 +110,28 @@ impl ServeSession {
let root_project = Project::load_initial_project(&vfs, start_path)?;
// If git filter is active, ensure the project file location is acknowledged
// This is necessary so the project structure exists even with no git changes
if let Some(ref filter) = git_filter {
filter.force_acknowledge(start_path);
filter.force_acknowledge(&root_project.file_location);
filter.force_acknowledge(root_project.folder_location());
log::debug!(
"Force acknowledged project at {}",
root_project.file_location.display()
);
}
let mut tree = RojoTree::new(InstanceSnapshot::new());
let root_id = tree.get_root_id();
-let instance_context =
-InstanceContext::with_emit_legacy_scripts(root_project.emit_legacy_scripts);
let instance_context = match &git_filter {
Some(filter) => {
InstanceContext::with_git_filter(root_project.emit_legacy_scripts, Arc::clone(filter))
}
None => InstanceContext::with_emit_legacy_scripts(root_project.emit_legacy_scripts),
};
log::trace!("Generating snapshot of instances from VFS");
let snapshot = snapshot_from_vfs(&instance_context, &vfs, start_path)?;
@@ -133,6 +157,7 @@ impl ServeSession {
Arc::clone(&vfs),
Arc::clone(&message_queue),
tree_mutation_receiver,
git_filter,
);
Ok(Self {


@@ -8,6 +8,7 @@ use anyhow::Context;
use serde::{Deserialize, Serialize};
use crate::{
git::SharedGitFilter,
glob::Glob,
path_serializer,
project::ProjectNode,
@@ -152,13 +153,27 @@ impl Default for InstanceMetadata {
}
}
-#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
+#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct InstanceContext {
#[serde(skip_serializing_if = "Vec::is_empty")]
pub path_ignore_rules: Arc<Vec<PathIgnoreRule>>,
pub emit_legacy_scripts: bool,
#[serde(skip_serializing_if = "Vec::is_empty")]
pub sync_rules: Vec<SyncRule>,
/// Optional Git filter for --git-since mode. When set, only files that have
/// changed since the specified Git reference will be synced.
#[serde(skip)]
pub git_filter: Option<SharedGitFilter>,
}
impl PartialEq for InstanceContext {
fn eq(&self, other: &Self) -> bool {
// Note: git_filter is intentionally excluded from comparison
// since it's runtime state, not configuration
self.path_ignore_rules == other.path_ignore_rules
&& self.emit_legacy_scripts == other.emit_legacy_scripts
&& self.sync_rules == other.sync_rules
}
}
impl InstanceContext {
@@ -167,6 +182,7 @@ impl InstanceContext {
path_ignore_rules: Arc::new(Vec::new()),
emit_legacy_scripts: emit_legacy_scripts_default().unwrap(),
sync_rules: Vec::new(),
git_filter: None,
}
}
@@ -179,6 +195,36 @@ impl InstanceContext {
}
}
/// Creates a new InstanceContext with a Git filter for --git-since mode.
pub fn with_git_filter(
emit_legacy_scripts: Option<bool>,
git_filter: SharedGitFilter,
) -> Self {
Self {
git_filter: Some(git_filter),
..Self::with_emit_legacy_scripts(emit_legacy_scripts)
}
}
/// Sets the Git filter for this context.
pub fn set_git_filter(&mut self, git_filter: Option<SharedGitFilter>) {
self.git_filter = git_filter;
}
/// Returns true if the given path should be acknowledged (synced).
/// If no git filter is set, all paths are acknowledged.
pub fn is_path_acknowledged(&self, path: &Path) -> bool {
match &self.git_filter {
Some(filter) => filter.is_acknowledged(path),
None => true,
}
}
/// Returns true if a git filter is active.
pub fn has_git_filter(&self) -> bool {
self.git_filter.is_some()
}
/// Extend the list of ignore rules in the context with the given new rules.
pub fn add_path_ignore_rules<I>(&mut self, new_rules: I)
where


@@ -8,7 +8,7 @@ use rbx_dom_weak::{
ustr, HashMapExt as _, UstrMap, UstrSet,
};
-use crate::{RojoRef, REF_POINTER_ATTRIBUTE_PREFIX};
+use crate::{variant_eq::variant_eq, RojoRef, REF_POINTER_ATTRIBUTE_PREFIX};
use super::{
patch::{PatchAdd, PatchSet, PatchUpdate},
@@ -127,7 +127,7 @@ fn compute_property_patches(
match instance.properties().get(&name) {
Some(instance_value) => {
-if &snapshot_value != instance_value {
+if !variant_eq(&snapshot_value, instance_value) {
changed_properties.insert(name, Some(snapshot_value));
}
}
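Switching from `!=` to `variant_eq` avoids flagging property updates for values that differ only by floating-point noise. The sketch below assumes epsilon-style semantics for the float case; Rojo's actual `variant_eq` covers all `Variant` types and its exact tolerance may differ.

```rust
// Assumed-semantics sketch: a relative-epsilon float comparison.
fn float_eq(a: f32, b: f32) -> bool {
    (a - b).abs() <= f32::EPSILON * a.abs().max(b.abs()).max(1.0)
}

fn main() {
    let a = 0.3_f32;
    let b = f32::from_bits(a.to_bits() + 1); // one ULP away from `a`
    assert!(a != b); // exact comparison reports a spurious change
    assert!(float_eq(a, b)); // tolerant comparison does not
}
```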


@@ -109,8 +109,17 @@ pub fn syncback_csv<'sync>(
if !meta.is_empty() {
let parent = snapshot.path.parent_err()?;
let meta_stem = snapshot.middleware
.and_then(|mw| {
let ext = format!(".{}", crate::syncback::extension_for_middleware(mw));
snapshot.path.file_name()
.and_then(|n| n.to_str())
.and_then(|s| s.strip_suffix(ext.as_str()))
.map(str::to_owned)
})
.unwrap_or_else(|| new_inst.name.clone());
fs_snapshot.add_file(
-parent.join(format!("{}.meta.json", new_inst.name)),
+parent.join(format!("{meta_stem}.meta.json")),
serde_json::to_vec_pretty(&meta).context("cannot serialize metadata")?,
)
}
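Per the commit message, the meta stem must come from stripping the full middleware extension off the snapshot file name rather than taking the first dot-segment, so dotted instance names like `Name.new` keep their full stem. A stdlib-only sketch of the stripping step (the real code obtains the extension via `extension_for_middleware`; here it is passed in directly):

```rust
// Strip a full middleware extension such as "server.luau" or "txt" from a
// snapshot file name, yielding the stem used for "<stem>.meta.json".
fn meta_stem(file_name: &str, middleware_ext: &str) -> Option<String> {
    file_name
        .strip_suffix(format!(".{middleware_ext}").as_str())
        .map(str::to_owned)
}

fn main() {
    // The cases from the commit message:
    assert_eq!(meta_stem("Name.new.luau", "luau").as_deref(), Some("Name.new"));
    assert_eq!(meta_stem("_Init.server.luau", "server.luau").as_deref(), Some("_Init"));
    assert_eq!(meta_stem("Name.new.txt", "txt").as_deref(), Some("Name.new"));
}
```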


@@ -8,7 +8,7 @@ use memofs::{DirEntry, Vfs};
use crate::{
snapshot::{InstanceContext, InstanceMetadata, InstanceSnapshot, InstigatingSource},
-syncback::{hash_instance, FsSnapshot, SyncbackReturn, SyncbackSnapshot},
+syncback::{hash_instance, slugify_name, FsSnapshot, SyncbackReturn, SyncbackSnapshot},
};
use super::{meta_file::DirectoryMetadata, snapshot_from_vfs};
@@ -134,12 +134,39 @@ pub fn syncback_dir_no_meta<'sync>(
let mut children = Vec::new();
let mut removed_children = Vec::new();
-// We have to enforce unique child names for the file system.
-let mut child_names = HashSet::with_capacity(new_inst.children().len());
// Build the old child map early so it can be used for deduplication below.
let mut old_child_map = HashMap::new();
if let Some(old_inst) = snapshot.old_inst() {
for child in old_inst.children() {
let inst = snapshot.get_old_instance(*child).unwrap();
old_child_map.insert(inst.name(), inst);
}
}
// Enforce unique filesystem names. Uses actual on-disk names for existing
// children and resolved names (with init-prefix) for new ones.
let mut fs_child_names = HashSet::with_capacity(new_inst.children().len());
let mut duplicate_set = HashSet::new();
for child_ref in new_inst.children() {
let child = snapshot.get_new_instance(*child_ref).unwrap();
-if !child_names.insert(child.name.to_lowercase()) {
let fs_name = old_child_map
.get(child.name.as_str())
.and_then(|old| old.metadata().relevant_paths.first())
.and_then(|p| p.file_name())
.and_then(|n| n.to_str())
.map(|s| s.to_lowercase())
.unwrap_or_else(|| {
let slug = slugify_name(&child.name);
let slug_lower = slug.to_lowercase();
// Mirror name_for_inst's init-prefix.
if slug_lower == "init" {
format!("_{slug_lower}")
} else {
slug_lower
}
});
if !fs_child_names.insert(fs_name) {
duplicate_set.insert(child.name.as_str()); duplicate_set.insert(child.name.as_str());
} }
} }
@@ -153,13 +180,7 @@ pub fn syncback_dir_no_meta<'sync>(
anyhow::bail!("Instance has more than 25 children with duplicate names"); anyhow::bail!("Instance has more than 25 children with duplicate names");
} }
if let Some(old_inst) = snapshot.old_inst() { if snapshot.old_inst().is_some() {
let mut old_child_map = HashMap::with_capacity(old_inst.children().len());
for child in old_inst.children() {
let inst = snapshot.get_old_instance(*child).unwrap();
old_child_map.insert(inst.name(), inst);
}
for new_child_ref in new_inst.children() { for new_child_ref in new_inst.children() {
let new_child = snapshot.get_new_instance(*new_child_ref).unwrap(); let new_child = snapshot.get_new_instance(*new_child_ref).unwrap();
if let Some(old_child) = old_child_map.remove(new_child.name.as_str()) { if let Some(old_child) = old_child_map.remove(new_child.name.as_str()) {
@@ -225,6 +246,12 @@ pub fn syncback_dir_no_meta<'sync>(
mod test { mod test {
use super::*; use super::*;
use std::path::PathBuf;
use crate::{
snapshot::{InstanceMetadata, InstanceSnapshot},
Project, RojoTree, SyncbackData, SyncbackSnapshot,
};
use memofs::{InMemoryFs, VfsSnapshot}; use memofs::{InMemoryFs, VfsSnapshot};
#[test] #[test]
@@ -261,4 +288,237 @@ mod test {
insta::assert_yaml_snapshot!(instance_snapshot); insta::assert_yaml_snapshot!(instance_snapshot);
} }
fn make_project() -> Project {
serde_json::from_str(r#"{"tree": {"$className": "DataModel"}}"#).unwrap()
}
fn make_vfs() -> Vfs {
let mut imfs = InMemoryFs::new();
imfs.load_snapshot("/root", VfsSnapshot::empty_dir()).unwrap();
Vfs::new(imfs)
}
/// Two children whose Roblox names are identical when lowercased ("Alpha"
/// and "alpha") but live at different filesystem paths because of the
/// `name` property ("Beta/" and "Alpha/" respectively). The dedup check
/// must use the actual filesystem paths, not the raw Roblox names, to
/// avoid a false-positive duplicate error.
#[test]
fn syncback_no_false_duplicate_with_name_prop() {
use rbx_dom_weak::{InstanceBuilder, WeakDom};
// Old child A: Roblox name "Alpha", on disk at "/root/Beta"
// (name property maps "Alpha" → "Beta" on the filesystem)
let old_child_a = InstanceSnapshot::new()
.name("Alpha")
.class_name("Folder")
.metadata(
InstanceMetadata::new()
.instigating_source(PathBuf::from("/root/Beta"))
.relevant_paths(vec![PathBuf::from("/root/Beta")]),
);
// Old child B: Roblox name "alpha", on disk at "/root/Alpha"
let old_child_b = InstanceSnapshot::new()
.name("alpha")
.class_name("Folder")
.metadata(
InstanceMetadata::new()
.instigating_source(PathBuf::from("/root/Alpha"))
.relevant_paths(vec![PathBuf::from("/root/Alpha")]),
);
let old_parent = InstanceSnapshot::new()
.name("Parent")
.class_name("Folder")
.children(vec![old_child_a, old_child_b])
.metadata(
InstanceMetadata::new()
.instigating_source(PathBuf::from("/root"))
.relevant_paths(vec![PathBuf::from("/root")]),
);
let old_tree = RojoTree::new(old_parent);
// New state: same two children in Roblox.
let mut new_tree = WeakDom::new(InstanceBuilder::new("ROOT"));
let new_parent = new_tree.insert(
new_tree.root_ref(),
InstanceBuilder::new("Folder").with_name("Parent"),
);
new_tree.insert(new_parent, InstanceBuilder::new("Folder").with_name("Alpha"));
new_tree.insert(new_parent, InstanceBuilder::new("Folder").with_name("alpha"));
let vfs = make_vfs();
let project = make_project();
let data = SyncbackData::for_test(&vfs, &old_tree, &new_tree, &project);
let snapshot = SyncbackSnapshot {
data,
old: Some(old_tree.get_root_id()),
new: new_parent,
path: PathBuf::from("/root"),
middleware: None,
};
let result = syncback_dir_no_meta(&snapshot);
assert!(
result.is_ok(),
"should not error when two children have the same lowercased Roblox \
name but map to distinct filesystem paths: {result:?}",
);
}
/// Two completely new children with the same non-init name would produce
/// the same filesystem entry and must be detected as a duplicate.
#[test]
fn syncback_detects_sibling_duplicate_names() {
use rbx_dom_weak::{InstanceBuilder, WeakDom};
let old_parent = InstanceSnapshot::new()
.name("Parent")
.class_name("Folder")
.metadata(
InstanceMetadata::new()
.instigating_source(PathBuf::from("/root"))
.relevant_paths(vec![PathBuf::from("/root")]),
);
let old_tree = RojoTree::new(old_parent);
let mut new_tree = WeakDom::new(InstanceBuilder::new("ROOT"));
let new_parent = new_tree.insert(
new_tree.root_ref(),
InstanceBuilder::new("Folder").with_name("Parent"),
);
// "Foo" is not a reserved name but two siblings named "Foo" still
// collide on disk.
new_tree.insert(new_parent, InstanceBuilder::new("Folder").with_name("Foo"));
new_tree.insert(new_parent, InstanceBuilder::new("Folder").with_name("Foo"));
let vfs = make_vfs();
let project = make_project();
let data = SyncbackData::for_test(&vfs, &old_tree, &new_tree, &project);
let snapshot = SyncbackSnapshot {
data,
old: Some(old_tree.get_root_id()),
new: new_parent,
path: PathBuf::from("/root"),
middleware: None,
};
let result = syncback_dir_no_meta(&snapshot);
assert!(
result.is_err(),
"should error when two new children would produce the same filesystem name",
);
}
/// A new child named "Init" (as a ModuleScript) would naively become
/// "Init.luau", which case-insensitively matches the parent's reserved
/// "init.luau". Syncback must resolve this automatically by prefixing the
/// filesystem name with '_' (→ "_Init.luau") rather than erroring.
#[test]
fn syncback_resolves_init_name_conflict() {
use rbx_dom_weak::{InstanceBuilder, WeakDom};
let old_parent = InstanceSnapshot::new()
.name("Parent")
.class_name("Folder")
.metadata(
InstanceMetadata::new()
.instigating_source(PathBuf::from("/root"))
.relevant_paths(vec![PathBuf::from("/root")]),
);
let old_tree = RojoTree::new(old_parent);
let mut new_tree = WeakDom::new(InstanceBuilder::new("ROOT"));
let new_parent = new_tree.insert(
new_tree.root_ref(),
InstanceBuilder::new("Folder").with_name("Parent"),
);
new_tree.insert(
new_parent,
InstanceBuilder::new("ModuleScript").with_name("Init"),
);
let vfs = make_vfs();
let project = make_project();
let data = SyncbackData::for_test(&vfs, &old_tree, &new_tree, &project);
let snapshot = SyncbackSnapshot {
data,
old: Some(old_tree.get_root_id()),
new: new_parent,
path: PathBuf::from("/root"),
middleware: None,
};
let result = syncback_dir_no_meta(&snapshot);
assert!(
result.is_ok(),
"should resolve init-name conflict by prefixing '_', not error: {result:?}",
);
// The child should have been placed at "_Init.luau", not "Init.luau".
let child_file_name = result
.unwrap()
.children
.into_iter()
.next()
.and_then(|c| c.path.file_name().map(|n| n.to_string_lossy().into_owned()))
.unwrap_or_default();
assert!(
child_file_name.starts_with('_'),
"child filesystem name should start with '_' to avoid init collision, \
got: {child_file_name}",
);
}
/// A child whose filesystem name is stored with a slugified prefix (e.g.
/// "_Init") must NOT be blocked — only the bare "init" stem is reserved.
#[test]
fn syncback_allows_slugified_init_name() {
use rbx_dom_weak::{InstanceBuilder, WeakDom};
// Existing child: on disk as "_Init" (slugified from a name with an
// illegal character), its stem is "_init" which is not reserved.
let old_child = InstanceSnapshot::new()
.name("Init")
.class_name("Folder")
.metadata(
InstanceMetadata::new()
.instigating_source(PathBuf::from("/root/_Init"))
.relevant_paths(vec![PathBuf::from("/root/_Init")]),
);
let old_parent = InstanceSnapshot::new()
.name("Parent")
.class_name("Folder")
.children(vec![old_child])
.metadata(
InstanceMetadata::new()
.instigating_source(PathBuf::from("/root"))
.relevant_paths(vec![PathBuf::from("/root")]),
);
let old_tree = RojoTree::new(old_parent);
let mut new_tree = WeakDom::new(InstanceBuilder::new("ROOT"));
let new_parent = new_tree.insert(
new_tree.root_ref(),
InstanceBuilder::new("Folder").with_name("Parent"),
);
new_tree.insert(new_parent, InstanceBuilder::new("Folder").with_name("Init"));
let vfs = make_vfs();
let project = make_project();
let data = SyncbackData::for_test(&vfs, &old_tree, &new_tree, &project);
let snapshot = SyncbackSnapshot {
data,
old: Some(old_tree.get_root_id()),
new: new_parent,
path: PathBuf::from("/root"),
middleware: None,
};
let result = syncback_dir_no_meta(&snapshot);
assert!(
result.is_ok(),
"should allow a child whose filesystem name is slugified away from \
the reserved 'init' stem: {result:?}",
);
}
} }
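The deduplication rule in the hunk above (compare the names children would actually occupy on disk, not their raw Roblox names) can be sketched in isolation. In this sketch, `slugify_name` is a trivial stand-in for Rojo's real helper, and a plain map stands in for the old-instance metadata lookup:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical stand-in for Rojo's slugify_name: strip characters that are
// invalid in file names. The real implementation is more thorough.
fn slugify_name(name: &str) -> String {
    name.chars().filter(|c| !r#"<>:"/\|?*"#.contains(*c)).collect()
}

/// Returns the Roblox names that would collide on a case-insensitive
/// file system, preferring recorded on-disk names over resolved ones.
fn find_duplicates<'a>(
    children: &'a [&'a str],
    on_disk: &HashMap<&str, &str>, // Roblox name -> existing file name
) -> Vec<&'a str> {
    let mut seen = HashSet::new();
    let mut dups = Vec::new();
    for &name in children {
        let fs_name = match on_disk.get(name) {
            Some(file) => file.to_lowercase(),
            None => {
                let slug = slugify_name(name).to_lowercase();
                // Mirror the init-prefix rule: a bare "init" stem is reserved.
                if slug == "init" { format!("_{slug}") } else { slug }
            }
        };
        if !seen.insert(fs_name) {
            dups.push(name);
        }
    }
    dups
}

fn main() {
    // "Alpha" lives at "Beta" on disk (renamed via the `name` property),
    // so it does not collide with a sibling literally named "alpha".
    let on_disk = HashMap::from([("Alpha", "Beta")]);
    assert!(find_duplicates(&["Alpha", "alpha"], &on_disk).is_empty());
    // Two brand-new "Foo" siblings do collide.
    assert_eq!(find_duplicates(&["Foo", "Foo"], &HashMap::new()), vec!["Foo"]);
    println!("ok");
}
```

This mirrors why the old child map must be built before the duplicate check: only the map can say which name a child already occupies on disk.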


@@ -158,16 +158,17 @@ pub fn syncback_lua<'sync>(
     if !meta.is_empty() {
         let parent_location = snapshot.path.parent_err()?;
-        let instance_name = &snapshot.new_inst().name;
-        let slugified;
-        let meta_name = if crate::syncback::validate_file_name(instance_name).is_err() {
-            slugified = crate::syncback::slugify_name(instance_name);
-            &slugified
-        } else {
-            instance_name
-        };
+        let meta_stem = snapshot.middleware
+            .and_then(|mw| {
+                let ext = format!(".{}", crate::syncback::extension_for_middleware(mw));
+                snapshot.path.file_name()
+                    .and_then(|n| n.to_str())
+                    .and_then(|s| s.strip_suffix(ext.as_str()))
+                    .map(str::to_owned)
+            })
+            .unwrap_or_else(|| snapshot.new_inst().name.clone());
         fs_snapshot.add_file(
-            parent_location.join(format!("{}.meta.json", meta_name)),
+            parent_location.join(format!("{meta_stem}.meta.json")),
             serde_json::to_vec_pretty(&meta).context("cannot serialize metadata")?,
         );
     }


@@ -154,11 +154,18 @@ impl AdjacentMetadata {
             .old_inst()
             .and_then(|inst| inst.metadata().specified_name.clone())
             .or_else(|| {
-                // If this is a new instance and its name is invalid for the filesystem,
-                // we need to specify the name in meta.json so it can be preserved
+                // Write name when the filesystem path doesn't match the
+                // instance name (invalid chars or init-prefix).
                 if snapshot.old_inst().is_none() {
                     let instance_name = &snapshot.new_inst().name;
-                    if validate_file_name(instance_name).is_err() {
+                    let fs_stem = path
+                        .file_name()
+                        .and_then(|n| n.to_str())
+                        .map(|s| s.split('.').next().unwrap_or(s))
+                        .unwrap_or("");
+                    if validate_file_name(instance_name).is_err()
+                        || fs_stem != instance_name.as_str()
+                    {
                         Some(instance_name.clone())
                     } else {
                         None
@@ -421,11 +428,17 @@ impl DirectoryMetadata {
             .old_inst()
             .and_then(|inst| inst.metadata().specified_name.clone())
             .or_else(|| {
-                // If this is a new instance and its name is invalid for the filesystem,
-                // we need to specify the name in meta.json so it can be preserved
+                // Write name when the directory name doesn't match the
+                // instance name (invalid chars or init-prefix).
                 if snapshot.old_inst().is_none() {
                     let instance_name = &snapshot.new_inst().name;
-                    let fs_name = path
+                    let fs_name = path
                         .file_name()
                         .and_then(|n| n.to_str())
                         .unwrap_or("");
+                    if validate_file_name(instance_name).is_err()
+                        || fs_name != instance_name.as_str()
+                    {
                         Some(instance_name.clone())
                     } else {
                         None
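The mismatch check above reduces to a small predicate: write a `name` field whenever the on-disk stem no longer matches the Roblox name. This sketch mirrors the first-dot-segment stem used in this hunk; note the commit messages in this range describe why first-segment splitting misbehaves for dotted names in the related meta-stem derivation, so treat this as an illustration of the check, not a robust stem parser:

```rust
/// Decide whether a new instance needs a `name` field in its meta.json:
/// whenever the on-disk stem differs from the Roblox name (illegal
/// characters slugified away, or an init-prefix applied).
fn needs_name_field(fs_file_name: &str, instance_name: &str) -> bool {
    // First-dot-segment stem, as in the hunk above.
    let fs_stem = fs_file_name.split('.').next().unwrap_or(fs_file_name);
    fs_stem != instance_name
}

fn main() {
    // "Init" was written as "_Init.luau", so the original name must be kept.
    assert!(needs_name_field("_Init.luau", "Init"));
    // "Foo" round-trips unchanged; no meta entry needed.
    assert!(!needs_name_field("Foo.luau", "Foo"));
    println!("ok");
}
```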


@@ -61,6 +61,10 @@ pub use self::{
 /// This will inspect the path and find the appropriate middleware for it,
 /// taking user-written rules into account. Then, it will attempt to convert
 /// the path into an InstanceSnapshot using that middleware.
+///
+/// If a git filter is active in the context and the path is not acknowledged
+/// (i.e., the file hasn't changed since the base git reference), this function
+/// returns `Ok(None)` to skip syncing that file.
 #[profiling::function]
 pub fn snapshot_from_vfs(
     context: &InstanceContext,
@@ -72,6 +76,16 @@ pub fn snapshot_from_vfs(
         None => return Ok(None),
     };
+    // Check if this path is acknowledged by the git filter.
+    // If not, skip this path entirely.
+    if !context.is_path_acknowledged(path) {
+        log::trace!(
+            "Skipping path {} (not acknowledged by git filter)",
+            path.display()
+        );
+        return Ok(None);
+    }
     if meta.is_dir() {
         let (middleware, dir_name, init_path) = get_dir_middleware(vfs, path)?;
         // TODO: Support user defined init paths
@@ -213,6 +227,10 @@ pub enum Middleware {
 impl Middleware {
     /// Creates a snapshot for the given path from the Middleware with
     /// the provided name.
+    ///
+    /// When a git filter is active in the context, `ignore_unknown_instances`
+    /// will be set to `true` on all generated snapshots to preserve descendants
+    /// in Studio that are not tracked by Rojo.
     fn snapshot(
         &self,
         context: &InstanceContext,
@@ -262,6 +280,14 @@ impl Middleware {
         };
         if let Ok(Some(ref mut snapshot)) = output {
             snapshot.metadata.middleware = Some(*self);
+            // When git filter is active, force ignore_unknown_instances to true
+            // so that we don't delete children in Studio that aren't tracked.
+            if context.has_git_filter() {
+                snapshot.metadata.ignore_unknown_instances = true;
+                // Also apply this recursively to all children
+                set_ignore_unknown_instances_recursive(&mut snapshot.children);
+            }
         }
         output
     }
@@ -365,6 +391,16 @@ impl Middleware {
     }
 }
+/// Recursively sets `ignore_unknown_instances` to `true` on all children.
+/// This is used when git filter is active to ensure we don't delete
+/// children in Studio that aren't tracked by Rojo.
+fn set_ignore_unknown_instances_recursive(children: &mut [InstanceSnapshot]) {
+    for child in children {
+        child.metadata.ignore_unknown_instances = true;
+        set_ignore_unknown_instances_recursive(&mut child.children);
+    }
+}
 /// A helper for easily defining a SyncRule. Arguments are passed literally
 /// to this macro in the order `include`, `middleware`, `suffix`,
 /// and `exclude`. Both `suffix` and `exclude` are optional.
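The recursive flag walk added above is simple enough to demonstrate on a toy tree. `Snap` is a hypothetical stand-in for `InstanceSnapshot`, holding only the flag and the children:

```rust
/// Minimal stand-in for an InstanceSnapshot tree, to illustrate the
/// recursive flag walk used when the git filter is active.
struct Snap {
    ignore_unknown_instances: bool,
    children: Vec<Snap>,
}

fn set_ignore_recursive(children: &mut [Snap]) {
    for child in children {
        child.ignore_unknown_instances = true;
        set_ignore_recursive(&mut child.children);
    }
}

fn main() {
    let mut root = Snap {
        ignore_unknown_instances: false,
        children: vec![Snap {
            ignore_unknown_instances: false,
            children: vec![Snap { ignore_unknown_instances: false, children: vec![] }],
        }],
    };
    // Mirror the call site: the snapshot itself is flagged by the caller,
    // then the helper walks every descendant.
    root.ignore_unknown_instances = true;
    set_ignore_recursive(&mut root.children);
    assert!(root.children[0].ignore_unknown_instances);
    assert!(root.children[0].children[0].ignore_unknown_instances);
    println!("ok");
}
```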


@@ -83,6 +83,19 @@ pub fn snapshot_project(
             // file being updated.
             snapshot.metadata.relevant_paths.push(path.to_path_buf());
+            // When git filter is active, also register the project folder as a
+            // relevant path. This serves as a catch-all so that file changes
+            // not under any specific $path node can still walk up the directory
+            // tree and trigger a re-snapshot of the entire project.
+            if context.has_git_filter() {
+                if let Some(folder) = path.parent() {
+                    let normalized = vfs
+                        .canonicalize(folder)
+                        .unwrap_or_else(|_| folder.to_path_buf());
+                    snapshot.metadata.relevant_paths.push(normalized);
+                }
+            }
             Ok(Some(snapshot))
         }
         None => Ok(None),
@@ -137,6 +150,26 @@ pub fn snapshot_project_node(
                 // Take the snapshot's metadata as-is, which will be mutated later
                 // on.
                 metadata = snapshot.metadata;
+            } else if context.has_git_filter() {
+                // When the git filter is active and the $path was filtered out
+                // (no acknowledged files yet), we still need to register the path
+                // in relevant_paths. This allows the change processor to map file
+                // changes in this directory back to this project node instance,
+                // triggering a re-snapshot that will pick up newly modified files.
+                let normalized = vfs
+                    .canonicalize(full_path.as_ref())
+                    .unwrap_or_else(|_| full_path.to_path_buf());
+                metadata.relevant_paths.push(normalized);
+                // The VFS only sets up file watches via read() and read_dir(),
+                // not via metadata(). Since the git filter caused snapshot_from_vfs
+                // to return early (before read_dir was called), the VFS is not
+                // watching this path. We must read the directory here to ensure
+                // the VFS sets up a recursive watch, otherwise file change events
+                // will never fire and live sync won't detect modifications.
+                if full_path.is_dir() {
+                    let _ = vfs.read_dir(&full_path);
+                }
             }
         }
@@ -192,6 +225,17 @@ pub fn snapshot_project_node(
         }
         (_, None, _, Some(PathNode::Required(path))) => {
+            // If git filter is active and the path was filtered out, treat it
+            // as if the path was optional and skip this node.
+            if context.has_git_filter() {
+                log::trace!(
+                    "Skipping project node '{}' because its path was filtered by git filter: {}",
+                    instance_name,
+                    path.display()
+                );
+                return Ok(None);
+            }
             anyhow::bail!(
                 "Rojo project referred to a file using $path that could not be turned into a Roblox Instance by Rojo.\n\
                  Check that the file exists and is a file type known by Rojo.\n\
@@ -282,7 +326,12 @@ pub fn snapshot_project_node(
     // If the user didn't specify it AND $path was not specified (meaning
     // there's no existing value we'd be stepping on from a project file or meta
     // file), set it to true.
-    if let Some(ignore) = node.ignore_unknown_instances {
+    //
+    // When git filter is active, always set to true to preserve descendants
+    // in Studio that are not tracked by Rojo.
+    if context.has_git_filter() {
+        metadata.ignore_unknown_instances = true;
+    } else if let Some(ignore) = node.ignore_unknown_instances {
         metadata.ignore_unknown_instances = ignore;
     } else if node.path.is_none() {
         // TODO: Introduce a strict mode where $ignoreUnknownInstances is never


@@ -58,8 +58,17 @@ pub fn syncback_txt<'sync>(
     if !meta.is_empty() {
         let parent = snapshot.path.parent_err()?;
+        let meta_stem = snapshot.middleware
+            .and_then(|mw| {
+                let ext = format!(".{}", crate::syncback::extension_for_middleware(mw));
+                snapshot.path.file_name()
+                    .and_then(|n| n.to_str())
+                    .and_then(|s| s.strip_suffix(ext.as_str()))
+                    .map(str::to_owned)
+            })
+            .unwrap_or_else(|| new_inst.name.clone());
         fs_snapshot.add_file(
-            parent.join(format!("{}.meta.json", new_inst.name)),
+            parent.join(format!("{meta_stem}.meta.json")),
             serde_json::to_vec_pretty(&meta).context("could not serialize metadata")?,
         );
     }


@@ -36,23 +36,33 @@ pub fn name_for_inst<'a>(
         | Middleware::ServerScriptDir
         | Middleware::ClientScriptDir
         | Middleware::ModuleScriptDir => {
-            if validate_file_name(&new_inst.name).is_err() {
+            let name = if validate_file_name(&new_inst.name).is_err() {
                 Cow::Owned(slugify_name(&new_inst.name))
             } else {
-                Cow::Borrowed(&new_inst.name)
+                Cow::Borrowed(new_inst.name.as_str())
+            };
+            // Prefix "init" to avoid colliding with reserved init files.
+            if name.to_lowercase() == "init" {
+                Cow::Owned(format!("_{name}"))
+            } else {
+                name
             }
         }
         _ => {
             let extension = extension_for_middleware(middleware);
             let slugified;
-            let final_name = if validate_file_name(&new_inst.name).is_err() {
+            let stem: &str = if validate_file_name(&new_inst.name).is_err() {
                 slugified = slugify_name(&new_inst.name);
                 &slugified
             } else {
                 &new_inst.name
             };
-            Cow::Owned(format!("{final_name}.{extension}"))
+            // Prefix "init" stems to avoid colliding with reserved init files.
+            if stem.to_lowercase() == "init" {
+                Cow::Owned(format!("_{stem}.{extension}"))
+            } else {
+                Cow::Owned(format!("{stem}.{extension}"))
+            }
         }
     })
 }


@@ -52,6 +52,7 @@ pub fn syncback_loop(
     old_tree: &mut RojoTree,
     mut new_tree: WeakDom,
     project: &Project,
+    force_json: bool,
 ) -> anyhow::Result<FsSnapshot> {
     let ignore_patterns = project
         .syncback_rules
@@ -153,6 +154,7 @@ pub fn syncback_loop(
         old_tree,
         new_tree: &new_tree,
         project,
+        force_json,
     };
     let mut snapshots = vec![SyncbackSnapshot {
@@ -197,7 +199,7 @@ pub fn syncback_loop(
             }
         }
-        let middleware = get_best_middleware(&snapshot);
+        let middleware = get_best_middleware(&snapshot, force_json);
         log::trace!(
             "Middleware for {inst_path} is {:?} (path is {})",
@@ -213,10 +215,14 @@ pub fn syncback_loop(
         let syncback = match middleware.syncback(&snapshot) {
             Ok(syncback) => syncback,
            Err(err) if middleware == Middleware::Dir => {
-                let new_middleware = match env::var(DEBUG_MODEL_FORMAT_VAR) {
-                    Ok(value) if value == "1" => Middleware::Rbxmx,
-                    Ok(value) if value == "2" => Middleware::JsonModel,
-                    _ => Middleware::Rbxm,
+                let new_middleware = if force_json {
+                    Middleware::JsonModel
+                } else {
+                    match env::var(DEBUG_MODEL_FORMAT_VAR) {
+                        Ok(value) if value == "1" => Middleware::Rbxmx,
+                        Ok(value) if value == "2" => Middleware::JsonModel,
+                        _ => Middleware::Rbxm,
+                    }
                 };
                 let file_name = snapshot
                     .path
@@ -295,12 +301,13 @@ pub struct SyncbackReturn<'sync> {
     pub removed_children: Vec<InstanceWithMeta<'sync>>,
 }
-pub fn get_best_middleware(snapshot: &SyncbackSnapshot) -> Middleware {
+pub fn get_best_middleware(snapshot: &SyncbackSnapshot, force_json: bool) -> Middleware {
     // At some point, we're better off using an O(1) method for checking
     // equality for classes like this.
     static JSON_MODEL_CLASSES: OnceLock<HashSet<&str>> = OnceLock::new();
     let json_model_classes = JSON_MODEL_CLASSES.get_or_init(|| {
         [
+            "Actor",
             "Sound",
             "SoundGroup",
             "Sky",
@@ -318,6 +325,11 @@ pub fn get_best_middleware(snapshot: &SyncbackSnapshot) -> Middleware {
             "ChatInputBarConfiguration",
             "BubbleChatConfiguration",
             "ChannelTabsConfiguration",
+            "RemoteEvent",
+            "UnreliableRemoteEvent",
+            "RemoteFunction",
+            "BindableEvent",
+            "BindableFunction",
         ]
         .into()
     });
@@ -361,10 +373,18 @@ pub fn get_best_middleware(snapshot: &SyncbackSnapshot) -> Middleware {
     }
     if middleware == Middleware::Rbxm {
-        middleware = match env::var(DEBUG_MODEL_FORMAT_VAR) {
-            Ok(value) if value == "1" => Middleware::Rbxmx,
-            Ok(value) if value == "2" => Middleware::JsonModel,
-            _ => Middleware::Rbxm,
+        middleware = if force_json {
+            if !inst.children().is_empty() {
+                Middleware::Dir
+            } else {
+                Middleware::JsonModel
+            }
+        } else {
+            match env::var(DEBUG_MODEL_FORMAT_VAR) {
+                Ok(value) if value == "1" => Middleware::Rbxmx,
+                Ok(value) if value == "2" => Middleware::JsonModel,
+                _ => Middleware::Rbxm,
+            }
         }
     }
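The `force_json` fallback above encodes a small decision table: model-like instances become JSON models, and an instance with children becomes a directory so those children can still be represented as files. A sketch with a pared-down `Middleware` enum (the `DEBUG_MODEL_FORMAT_VAR` override branch is collapsed to its default for brevity):

```rust
#[derive(Debug, PartialEq)]
enum Middleware { Rbxm, JsonModel, Dir }

/// Sketch of the fallback choice when an instance would otherwise be
/// written as a binary model file.
fn resolve_model_middleware(force_json: bool, has_children: bool) -> Middleware {
    if force_json {
        if has_children { Middleware::Dir } else { Middleware::JsonModel }
    } else {
        // In the real code, DEBUG_MODEL_FORMAT_VAR can override this.
        Middleware::Rbxm
    }
}

fn main() {
    assert_eq!(resolve_model_middleware(true, false), Middleware::JsonModel);
    assert_eq!(resolve_model_middleware(true, true), Middleware::Dir);
    assert_eq!(resolve_model_middleware(false, true), Middleware::Rbxm);
    println!("ok");
}
```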


@@ -20,6 +20,7 @@ pub struct SyncbackData<'sync> {
     pub(super) old_tree: &'sync RojoTree,
     pub(super) new_tree: &'sync WeakDom,
     pub(super) project: &'sync Project,
+    pub(super) force_json: bool,
 }
 pub struct SyncbackSnapshot<'sync> {
@@ -43,7 +44,7 @@ impl<'sync> SyncbackSnapshot<'sync> {
             path: PathBuf::new(),
             middleware: None,
         };
-        let middleware = get_best_middleware(&snapshot);
+        let middleware = get_best_middleware(&snapshot, self.data.force_json);
         let name = name_for_inst(middleware, snapshot.new_inst(), snapshot.old_inst())?;
         snapshot.path = self.path.join(name.as_ref());
@@ -69,7 +70,7 @@ impl<'sync> SyncbackSnapshot<'sync> {
             path: PathBuf::new(),
             middleware: None,
         };
-        let middleware = get_best_middleware(&snapshot);
+        let middleware = get_best_middleware(&snapshot, self.data.force_json);
         let name = name_for_inst(middleware, snapshot.new_inst(), snapshot.old_inst())?;
         snapshot.path = base_path.join(name.as_ref());
@@ -237,6 +238,24 @@ pub fn inst_path(dom: &WeakDom, referent: Ref) -> String {
     path.join("/")
 }
+impl<'sync> SyncbackData<'sync> {
+    /// Constructs a `SyncbackData` for use in unit tests.
+    #[cfg(test)]
+    pub fn for_test(
+        vfs: &'sync Vfs,
+        old_tree: &'sync RojoTree,
+        new_tree: &'sync WeakDom,
+        project: &'sync Project,
+    ) -> Self {
+        Self {
+            vfs,
+            old_tree,
+            new_tree,
+            project,
+            force_json: false,
+        }
+    }
+}
 #[cfg(test)]
 mod test {
     use rbx_dom_weak::{InstanceBuilder, WeakDom};


@@ -13,7 +13,6 @@ use rbx_dom_weak::{
 };
 use crate::{
-    json,
     serve_session::ServeSession,
     snapshot::{InstanceWithMeta, PatchSet, PatchUpdate},
     web::{
@@ -22,11 +21,10 @@ use crate::{
             ServerInfoResponse, SocketPacket, SocketPacketBody, SocketPacketType, SubscribeMessage,
             WriteRequest, WriteResponse, PROTOCOL_VERSION, SERVER_VERSION,
         },
-        util::{json, json_ok},
+        util::{deserialize_msgpack, msgpack, msgpack_ok, serialize_msgpack},
     },
     web_api::{
-        BufferEncode, InstanceUpdate, RefPatchRequest, RefPatchResponse, SerializeRequest,
-        SerializeResponse,
+        InstanceUpdate, RefPatchRequest, RefPatchResponse, SerializeRequest, SerializeResponse,
     },
 };
@@ -42,7 +40,7 @@ pub async fn call(serve_session: Arc<ServeSession>, mut request: Request<Body>)
     if is_upgrade_request(&request) {
         service.handle_api_socket(&mut request).await
     } else {
-        json(
+        msgpack(
            ErrorResponse::bad_request(
                "/api/socket must be called as a websocket upgrade request",
            ),
@@ -58,7 +56,7 @@ pub async fn call(serve_session: Arc<ServeSession>, mut request: Request<Body>)
        }
        (&Method::POST, "/api/write") => service.handle_api_write(request).await,
-        (_method, path) => json(
+        (_method, path) => msgpack(
            ErrorResponse::not_found(format!("Route not found: {}", path)),
            StatusCode::NOT_FOUND,
        ),
@@ -79,7 +77,7 @@ impl ApiService {
        let tree = self.serve_session.tree();
        let root_instance_id = tree.get_root_id();
-        json_ok(&ServerInfoResponse {
+        msgpack_ok(&ServerInfoResponse {
            server_version: SERVER_VERSION.to_owned(),
            protocol_version: PROTOCOL_VERSION,
            session_id: self.serve_session.session_id(),
@@ -98,7 +96,7 @@ impl ApiService {
        let input_cursor: u32 = match argument.parse() {
            Ok(v) => v,
            Err(err) => {
-                return json(
+                return msgpack(
                    ErrorResponse::bad_request(format!("Malformed message cursor: {}", err)),
                    StatusCode::BAD_REQUEST,
                );
@@ -109,7 +107,7 @@ impl ApiService {
        let (response, websocket) = match upgrade(request, None) {
            Ok(result) => result,
            Err(err) => {
-                return json(
+                return msgpack(
                    ErrorResponse::internal_error(format!("WebSocket upgrade failed: {}", err)),
                    StatusCode::INTERNAL_SERVER_ERROR,
                );
@@ -136,10 +134,10 @@ impl ApiService {
        let body = body::to_bytes(request.into_body()).await.unwrap();
-        let request: WriteRequest = match json::from_slice(&body) {
+        let request: WriteRequest = match deserialize_msgpack(&body) {
            Ok(request) => request,
            Err(err) => {
-                return json(
+                return msgpack(
                    ErrorResponse::bad_request(format!("Invalid body: {}", err)),
                    StatusCode::BAD_REQUEST,
                );
@@ -147,7 +145,7 @@ impl ApiService {
        };
        if request.session_id != session_id {
-            return json(
+            return msgpack(
                ErrorResponse::bad_request("Wrong session ID"),
                StatusCode::BAD_REQUEST,
            );
@@ -173,7 +171,7 @@ impl ApiService {
        })
        .unwrap();
-        json_ok(WriteResponse { session_id })
+        msgpack_ok(WriteResponse { session_id })
    }
    async fn handle_api_read(&self, request: Request<Body>) -> Response<Body> {
@@ -183,7 +181,7 @@ impl ApiService {
        let requested_ids = match requested_ids {
            Ok(ids) => ids,
            Err(_) => {
-                return json(
+                return msgpack(
                    ErrorResponse::bad_request("Malformed ID list"),
                    StatusCode::BAD_REQUEST,
                );
@@ -207,7 +205,7 @@ impl ApiService {
            }
        }
-        json_ok(ReadResponse {
+        msgpack_ok(ReadResponse {
            session_id: self.serve_session.session_id(),
            message_cursor,
            instances,
@@ -225,10 +223,10 @@ impl ApiService {
        let session_id = self.serve_session.session_id();
        let body = body::to_bytes(request.into_body()).await.unwrap();
-        let request: SerializeRequest = match json::from_slice(&body) {
+        let request: SerializeRequest = match deserialize_msgpack(&body) {
            Ok(request) => request,
            Err(err) => {
-                return json(
+                return msgpack(
                    ErrorResponse::bad_request(format!("Invalid body: {}", err)),
                    StatusCode::BAD_REQUEST,
                );
@@ -236,7 +234,7 @@ impl ApiService {
        };
        if request.session_id != session_id {
-            return json(
+            return msgpack(
                ErrorResponse::bad_request("Wrong session ID"),
                StatusCode::BAD_REQUEST,
            );
@@ -269,7 +267,7 @@ impl ApiService {
            response_dom.transfer_within(child_ref, object_value);
        } else {
-            json(
+            msgpack(
                ErrorResponse::bad_request(format!("provided id {id} is not in the tree")),
                StatusCode::BAD_REQUEST,
            );
@@ -280,9 +278,9 @@ impl ApiService {
        let mut source = Vec::new();
        rbx_binary::to_writer(&mut source, &response_dom, &[response_dom.root_ref()]).unwrap();
-        json_ok(SerializeResponse {
+        msgpack_ok(SerializeResponse {
            session_id: self.serve_session.session_id(),
-            model_contents: BufferEncode::new(source),
+            model_contents: source,
        })
    }
@@ -294,10 +292,10 @@ impl ApiService {
        let session_id = self.serve_session.session_id();
        let body = body::to_bytes(request.into_body()).await.unwrap();
-        let request: RefPatchRequest = match json::from_slice(&body) {
+        let request: RefPatchRequest = match deserialize_msgpack(&body) {
Ok(request) => request, Ok(request) => request,
Err(err) => { Err(err) => {
return json( return msgpack(
ErrorResponse::bad_request(format!("Invalid body: {}", err)), ErrorResponse::bad_request(format!("Invalid body: {}", err)),
StatusCode::BAD_REQUEST, StatusCode::BAD_REQUEST,
); );
@@ -305,7 +303,7 @@ impl ApiService {
}; };
if request.session_id != session_id { if request.session_id != session_id {
return json( return msgpack(
ErrorResponse::bad_request("Wrong session ID"), ErrorResponse::bad_request("Wrong session ID"),
StatusCode::BAD_REQUEST, StatusCode::BAD_REQUEST,
); );
@@ -338,7 +336,7 @@ impl ApiService {
} }
} }
json_ok(RefPatchResponse { msgpack_ok(RefPatchResponse {
session_id: self.serve_session.session_id(), session_id: self.serve_session.session_id(),
patch: SubscribeMessage { patch: SubscribeMessage {
added: HashMap::new(), added: HashMap::new(),
@@ -354,7 +352,7 @@ impl ApiService {
let requested_id = match Ref::from_str(argument) { let requested_id = match Ref::from_str(argument) {
Ok(id) => id, Ok(id) => id,
Err(_) => { Err(_) => {
return json( return msgpack(
ErrorResponse::bad_request("Invalid instance ID"), ErrorResponse::bad_request("Invalid instance ID"),
StatusCode::BAD_REQUEST, StatusCode::BAD_REQUEST,
); );
@@ -366,7 +364,7 @@ impl ApiService {
let instance = match tree.get_instance(requested_id) { let instance = match tree.get_instance(requested_id) {
Some(instance) => instance, Some(instance) => instance,
None => { None => {
return json( return msgpack(
ErrorResponse::bad_request("Instance not found"), ErrorResponse::bad_request("Instance not found"),
StatusCode::NOT_FOUND, StatusCode::NOT_FOUND,
); );
@@ -376,7 +374,7 @@ impl ApiService {
let script_path = match pick_script_path(instance) { let script_path = match pick_script_path(instance) {
Some(path) => path, Some(path) => path,
None => { None => {
return json( return msgpack(
ErrorResponse::bad_request( ErrorResponse::bad_request(
"No appropriate file could be found to open this script", "No appropriate file could be found to open this script",
), ),
@@ -389,7 +387,7 @@ impl ApiService {
Ok(()) => {} Ok(()) => {}
Err(error) => match error { Err(error) => match error {
OpenError::Io(io_error) => { OpenError::Io(io_error) => {
return json( return msgpack(
ErrorResponse::internal_error(format!( ErrorResponse::internal_error(format!(
"Attempting to open {} failed because of the following io error: {}", "Attempting to open {} failed because of the following io error: {}",
script_path.display(), script_path.display(),
@@ -403,7 +401,7 @@ impl ApiService {
status, status,
stderr, stderr,
} => { } => {
return json( return msgpack(
ErrorResponse::internal_error(format!( ErrorResponse::internal_error(format!(
r#"The command '{}' to open '{}' failed with the error code '{}'. r#"The command '{}' to open '{}' failed with the error code '{}'.
Error logs: Error logs:
@@ -419,7 +417,7 @@ impl ApiService {
}, },
}; };
json_ok(OpenResponse { msgpack_ok(OpenResponse {
session_id: self.serve_session.session_id(), session_id: self.serve_session.session_id(),
}) })
} }
@@ -483,7 +481,7 @@ async fn handle_websocket_subscription(
match result { match result {
Ok((new_cursor, messages)) => { Ok((new_cursor, messages)) => {
if !messages.is_empty() { if !messages.is_empty() {
let json_message = { let msgpack_message = {
let tree = tree_handle.lock().unwrap(); let tree = tree_handle.lock().unwrap();
let api_messages = messages let api_messages = messages
.into_iter() .into_iter()
@@ -499,12 +497,12 @@ async fn handle_websocket_subscription(
}), }),
}; };
serde_json::to_string(&response)? serialize_msgpack(response)?
}; };
log::debug!("Sending batch of messages over WebSocket subscription"); log::debug!("Sending batch of messages over WebSocket subscription");
if websocket.send(Message::Text(json_message)).await.is_err() { if websocket.send(Message::Binary(msgpack_message)).await.is_err() {
// Client disconnected // Client disconnected
log::debug!("WebSocket subscription closed by client"); log::debug!("WebSocket subscription closed by client");
break; break;

View File

@@ -249,31 +249,8 @@ pub struct SerializeRequest {
 #[serde(rename_all = "camelCase")]
 pub struct SerializeResponse {
     pub session_id: SessionId,
-    pub model_contents: BufferEncode,
-}
-
-/// Using this struct we can force Roblox to JSONDecode this as a buffer.
-/// This is what Roblox's serde APIs use, so it saves a step in the plugin.
-#[derive(Debug, Serialize, Deserialize)]
-pub struct BufferEncode {
-    m: (),
-    t: Cow<'static, str>,
-    base64: String,
-}
-
-impl BufferEncode {
-    pub fn new(content: Vec<u8>) -> Self {
-        let base64 = data_encoding::BASE64.encode(&content);
-        Self {
-            m: (),
-            t: Cow::Borrowed("buffer"),
-            base64,
-        }
-    }
-
-    pub fn model(&self) -> &str {
-        &self.base64
-    }
+    #[serde(with = "serde_bytes")]
+    pub model_contents: Vec<u8>,
 }
 
 #[derive(Debug, Serialize, Deserialize)]
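For context on what the removed `BufferEncode` wrapper produced: a minimal Python sketch (hypothetical helper name; the field names and `"buffer"` tag are taken from the struct deleted above) of the JSON shape Roblox's `JSONDecode` recognizes as a buffer, and the one-third size overhead base64 adds compared to shipping raw bytes the way MessagePack's `bin` family now does.

```python
import base64
import json

def buffer_encode(content: bytes) -> str:
    # Mirrors the removed Rust BufferEncode struct: `m: ()` serializes to
    # JSON null, and the tag/base64 fields match its `t` and `base64` fields.
    return json.dumps({
        "m": None,
        "t": "buffer",
        "base64": base64.b64encode(content).decode("ascii"),
    })

payload = bytes(range(256)) * 12  # 3072 bytes of stand-in model data
encoded = buffer_encode(payload)
# base64 alone is 4/3 the size of the raw bytes, before any JSON framing
overhead = len(json.loads(encoded)["base64"]) / len(payload)
```

Dropping this wrapper in favor of `#[serde(with = "serde_bytes")] Vec<u8>` lets `rmp_serde` emit the model bytes directly, avoiding both the base64 inflation and the decode step in the plugin.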

View File

@@ -1,8 +1,48 @@
 use hyper::{header::CONTENT_TYPE, Body, Response, StatusCode};
-use serde::Serialize;
+use serde::{Deserialize, Serialize};
 
-pub fn json_ok<T: Serialize>(value: T) -> Response<Body> {
-    json(value, StatusCode::OK)
+pub fn msgpack_ok<T: Serialize>(value: T) -> Response<Body> {
+    msgpack(value, StatusCode::OK)
+}
+
+pub fn msgpack<T: Serialize>(value: T, code: StatusCode) -> Response<Body> {
+    let mut serialized = Vec::new();
+    let mut serializer = rmp_serde::Serializer::new(&mut serialized)
+        .with_human_readable()
+        .with_struct_map();
+
+    if let Err(err) = value.serialize(&mut serializer) {
+        return Response::builder()
+            .status(StatusCode::INTERNAL_SERVER_ERROR)
+            .header(CONTENT_TYPE, "text/plain")
+            .body(Body::from(err.to_string()))
+            .unwrap();
+    };
+
+    Response::builder()
+        .status(code)
+        .header(CONTENT_TYPE, "application/msgpack")
+        .body(Body::from(serialized))
+        .unwrap()
+}
+
+pub fn serialize_msgpack<T: Serialize>(value: T) -> anyhow::Result<Vec<u8>> {
+    let mut serialized = Vec::new();
+    let mut serializer = rmp_serde::Serializer::new(&mut serialized)
+        .with_human_readable()
+        .with_struct_map();
+
+    value.serialize(&mut serializer)?;
+    Ok(serialized)
+}
+
+pub fn deserialize_msgpack<'a, T: Deserialize<'a>>(
+    input: &'a [u8],
+) -> Result<T, rmp_serde::decode::Error> {
+    let mut deserializer = rmp_serde::Deserializer::new(input).with_human_readable();
+    T::deserialize(&mut deserializer)
 }
 
 pub fn json<T: Serialize>(value: T, code: StatusCode) -> Response<Body> {
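The helpers above configure `rmp_serde` with `.with_struct_map()`, so structs cross the wire as self-describing MessagePack maps keyed by field name rather than positional arrays. A small stdlib-only Python sketch (hypothetical helper names, covering only the fixmap/fixstr cases) of how such a map is laid out in bytes:

```python
def encode_fixstr(s: str) -> bytes:
    # fixstr: high bits 101, low 5 bits hold the length (< 32 bytes)
    data = s.encode("utf-8")
    assert len(data) < 32
    return bytes([0xA0 | len(data)]) + data

def encode_small_map(d: dict) -> bytes:
    # fixmap: high bits 1000, low 4 bits hold the pair count (< 16 entries)
    assert len(d) < 16
    out = bytes([0x80 | len(d)])
    for key, value in d.items():
        out += encode_fixstr(key) + encode_fixstr(value)
    return out

# With struct_map, a one-field struct looks like a named map on the wire:
# 0x81 (map, 1 pair), 0xa9 + "sessionId", 0xa3 + "abc"
wire = encode_small_map({"sessionId": "abc"})
```

Because the field names travel with the payload, the Roblox plugin can decode responses without a schema, much as it did with JSON, while binary fields stay compact.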

View File

@@ -0,0 +1,14 @@
+{
+    "name": "default",
+    "tree": {
+        "$className": "DataModel",
+        "ReplicatedStorage": {
+            "Project": {
+                "$path": "project/src",
+                "Module": {
+                    "$path": "module"
+                }
+            }
+        }
+    }
+}

View File

@@ -0,0 +1 @@
+return nil

View File

@@ -0,0 +1,14 @@
+{
+    "name": "default",
+    "tree": {
+        "$className": "DataModel",
+        "ReplicatedStorage": {
+            "Project": {
+                "$path": "src/",
+                "Module": {
+                    "$path": "../module"
+                }
+            }
+        }
+    }
+}

View File

@@ -0,0 +1 @@
+return nil

View File

@@ -10,6 +10,7 @@ use std::{
 use hyper_tungstenite::tungstenite::{connect, Message};
 use rbx_dom_weak::types::Ref;
+use serde::{Deserialize, Serialize};
 use tempfile::{tempdir, TempDir};
 
 use librojo::{
@@ -161,22 +162,16 @@ impl TestServeSession {
     pub fn get_api_rojo(&self) -> Result<ServerInfoResponse, reqwest::Error> {
         let url = format!("http://localhost:{}/api/rojo", self.port);
-        let body = reqwest::blocking::get(url)?.text()?;
+        let body = reqwest::blocking::get(url)?.bytes()?;
 
-        let value = jsonc_parser::parse_to_serde_value(&body, &Default::default())
-            .expect("Failed to parse JSON")
-            .expect("No JSON value");
-
-        Ok(serde_json::from_value(value).expect("Server returned malformed response"))
+        Ok(deserialize_msgpack(&body).expect("Server returned malformed response"))
     }
 
     pub fn get_api_read(&self, id: Ref) -> Result<ReadResponse<'_>, reqwest::Error> {
         let url = format!("http://localhost:{}/api/read/{}", self.port, id);
-        let body = reqwest::blocking::get(url)?.text()?;
+        let body = reqwest::blocking::get(url)?.bytes()?;
 
-        let value = jsonc_parser::parse_to_serde_value(&body, &Default::default())
-            .expect("Failed to parse JSON")
-            .expect("No JSON value");
-
-        Ok(serde_json::from_value(value).expect("Server returned malformed response"))
+        Ok(deserialize_msgpack(&body).expect("Server returned malformed response"))
     }
 
     pub fn get_api_socket_packet(
@@ -198,8 +193,8 @@ impl TestServeSession {
             }
 
             match socket.read() {
-                Ok(Message::Text(text)) => {
-                    let packet: SocketPacket = serde_json::from_str(&text)?;
+                Ok(Message::Binary(binary)) => {
+                    let packet: SocketPacket = deserialize_msgpack(&binary)?;
                     if packet.packet_type != packet_type {
                         continue;
                     }
@@ -212,7 +207,7 @@ impl TestServeSession {
                     return Err("WebSocket closed before receiving messages".into());
                 }
                 Ok(_) => {
-                    // Ignore other message types (ping, pong, binary)
+                    // Ignore other message types (ping, pong, text)
                     continue;
                 }
                 Err(hyper_tungstenite::tungstenite::Error::Io(e))
@@ -236,15 +231,37 @@ impl TestServeSession {
     ) -> Result<SerializeResponse, reqwest::Error> {
         let client = reqwest::blocking::Client::new();
         let url = format!("http://localhost:{}/api/serialize", self.port);
-        let body = serde_json::to_string(&SerializeRequest {
+        let body = serialize_msgpack(&SerializeRequest {
             session_id,
             ids: ids.to_vec(),
-        });
+        })
+        .unwrap();
 
-        client.post(url).body((body).unwrap()).send()?.json()
+        let body = client.post(url).body(body).send()?.bytes()?;
+        Ok(deserialize_msgpack(&body).expect("Server returned malformed response"))
     }
 }
 
+fn serialize_msgpack<T: Serialize>(value: T) -> Result<Vec<u8>, rmp_serde::encode::Error> {
+    let mut serialized = Vec::new();
+    let mut serializer = rmp_serde::Serializer::new(&mut serialized)
+        .with_human_readable()
+        .with_struct_map();
+
+    value.serialize(&mut serializer)?;
+    Ok(serialized)
+}
+
+fn deserialize_msgpack<'a, T: Deserialize<'a>>(
+    input: &'a [u8],
+) -> Result<T, rmp_serde::decode::Error> {
+    let mut deserializer = rmp_serde::Deserializer::new(input).with_human_readable();
+    T::deserialize(&mut deserializer)
+}
+
 /// Probably-okay way to generate random enough port numbers for running the
 /// Rojo live server.
 ///
@@ -262,11 +279,7 @@ fn get_port_number() -> usize {
 /// Since the provided structure intentionally includes unredacted referents,
 /// some post-processing is done to ensure they don't show up in the model.
 pub fn serialize_to_xml_model(response: &SerializeResponse, redactions: &RedactionMap) -> String {
-    let model_content = data_encoding::BASE64
-        .decode(response.model_contents.model().as_bytes())
-        .unwrap();
-
-    let mut dom = rbx_binary::from_reader(model_content.as_slice()).unwrap();
+    let mut dom = rbx_binary::from_reader(response.model_contents.as_slice()).unwrap();
 
     // This makes me realize that maybe we need a `descendants_mut` iter.
     let ref_list: Vec<Ref> = dom.descendants().map(|inst| inst.referent()).collect();
     for referent in ref_list {