@neerajsu is the ResolvingSchemaWalker still available? I'm not able to find it in the repository. Any idea if it has been removed with more recent version changes? The latest version I can find mention of the ResolvingSchemaWalker is 1.1.8 but the latest version still available on maven central is 1.5.
My comment is almost 2 years old. At that time, ResolvingSchemaWalker was experimental, so it has probably been renamed and/or incorporated into the main code. You'll have to dig into the latest source code to figure it out. I'm sure the functionality exists in the latest code.
from json-schema-validator.
If I understand correctly, say you have:
{
"type": "array",
"items": { "$ref": "foo://bar#" }
}
and at JSON Reference foo://bar#
you have:
{
"type": "string",
"minLength": 2
}
then you would like to have:
{
"type": "array",
"items": {
"type": "string",
"minLength": 2
}
}
and this, recursively?
from json-schema-validator.
Yes, that is exactly right. Is this possible with the current API? (I'm using version 2.0.0)
from json-schema-validator.
Well, by writing a processor, yes it is. However, it is quite delicate. Consider this schema:
{
"type": "array",
"minItems": 2,
"maxItems": 2,
"items": { "oneOf": [ { "type": "integer" }, { "$ref": "#" } ] }
}
This is, by essence, a recursive schema -- it would lead to an infinite loop on expansion.
More generally, such a processor should fail if a resolved JSON Reference is contained within the schema, and the pointer in the schema at which this reference was found is a child of the pointer of the reference. Oh, and there is the case of enum, for which we must not expand references.
And that is not all: if we expand a draft v4 schema, we don't want a draft v3 schema to come into the picture either.
As to other cases:
- reference loops, dangling references etc. are handled by RefResolver;
- syntax validation is handled by a SyntaxProcessor;
- SchemaTree equivalence is handled by SchemaTreeEquivalence.getInstance().
Such a processor is writable, but not easy!
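The pitfalls listed above (recursive schemas, enum values) can be sketched with a toy expander over a plain Map/List JSON model. This is not the library's API -- RefExpander, the registry, and all names here are hypothetical; it inlines $ref targets from an in-memory registry, leaves a reference in place when it is already on the current expansion path (a recursive schema), and never descends into enum values:

```java
import java.util.*;

// Hypothetical sketch: inline "$ref" nodes in a Map/List-based JSON model.
// A reference already on the expansion path indicates a recursive schema
// and is left in place rather than expanded forever.
public class RefExpander {
    private final Map<String, Object> registry; // URI -> schema

    public RefExpander(Map<String, Object> registry) {
        this.registry = registry;
    }

    @SuppressWarnings("unchecked")
    public Object expand(Object node, Deque<String> path) {
        if (node instanceof Map) {
            Map<String, Object> map = (Map<String, Object>) node;
            Object ref = map.get("$ref");
            if (ref instanceof String) {
                String uri = (String) ref;
                if (path.contains(uri)) {
                    return node; // recursive schema: keep the reference as-is
                }
                Object target = registry.get(uri);
                if (target == null) {
                    throw new IllegalArgumentException("dangling ref: " + uri);
                }
                path.push(uri);
                Object expanded = expand(target, path);
                path.pop();
                return expanded;
            }
            Map<String, Object> out = new LinkedHashMap<>();
            for (Map.Entry<String, Object> e : map.entrySet()) {
                // Per the discussion above: never expand inside "enum" values.
                out.put(e.getKey(),
                        "enum".equals(e.getKey()) ? e.getValue()
                                                  : expand(e.getValue(), path));
            }
            return out;
        }
        if (node instanceof List) {
            List<Object> out = new ArrayList<>();
            for (Object item : (List<Object>) node) {
                out.add(expand(item, path));
            }
            return out;
        }
        return node; // scalar: nothing to do
    }
}
```

Run against the array/foo://bar# example from the start of the thread, this produces the flattened schema shown there; it does not handle draft mismatches or pointer-scoped resolution, which is exactly why the real processor is "not easy".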
from json-schema-validator.
Hmm, no, this is not quite right: SchemaTreeEquivalence doesn't fit the bill, since it checks the current resolution context, whereas what is needed here is equivalence of the loading URI and base node. So another equivalence needs to be written, but this is the easy part.
from json-schema-validator.
Anyway, I think this is an interesting thing to have, so I'll give a go at it -- but not immediately: I have an Avro converter to write!
from json-schema-validator.
There is also the solution that you give a go at it, of course. In this case, if you have questions, do not hesitate to ask ;)
from json-schema-validator.
Thanks! This api is new to me but I'll give it a shot.
from json-schema-validator.
You will need to take a good look at the API of json-schema-core as well, since this is what you will use to build your chains:
http://fge.github.com/json-schema-core/stable/index.html
In particular, look at the processing package.
from json-schema-validator.
fge,
Perhaps I'm going about this the wrong way, but I basically need to "walk" the schema the same way that ValidationProcessor "walks" the schema. I see that you use the ArraySchemaSelector and ObjectSchemaSelector as helpers to cache/lookup things. Is there a reason that ObjectSchemaSelector, ObjectSchemaDigester, and ArraySchemaDigester are public, but ArraySchemaSelector has package access?
from json-schema-validator.
Uhm, that is a visibility bug... The initial plan was to have them all package visible!
But now that you mention it, maybe they should not... Your need here, and another one I will have in the near future, will require that I walk a JSON Schema as well (note that syntax validation also walks it in some way), so maybe this could be factorized away and reused for both syntax and instance validation, and for your use case.
Out of curiosity, how did you plan to use these selectors? And would you mind sending a pull request making ArraySchemaSelector public? For the general case I'll have to think it out some more.
from json-schema-validator.
Basically, I am doing deserialization/serialization of proprietary formats. Internally, we want our data representation and parsing to be very flexible.
For example, if I have a json schema:
{
"type" : "object",
"properties" : {
"name" : {
"type" : "string",
"required" : true
},
"age" : {
"type" : "number",
"required" : false
}
}
}
I can write a generic serializer/deserializer from this schema. I will "walk" this schema, then I can know that the first field is called "name" and the 2nd field is called "age", and also validate that the data received is in the appropriate format (i.e. required, string, date, number etc). Of course, any property can be a reference to another schema if the format is very complex.
When we need to adjust the schema, then all it will take is extending/modifying the json schema and our parsing logic can remain intact. Doing this with POJO's is too difficult to maintain. The way you are walking the schema and json value during validation is exactly what I need to do. But instead of a json value, I will be walking a proprietary serialization format.
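The schema-driven approach described above can be sketched over a plain Map-based model (not the library's SchemaTree; SchemaFieldLister and its output format are hypothetical): walk "properties" in declaration order and collect each field's name, type, and draft v3-style required flag, which is the information a generic serializer would dispatch on.

```java
import java.util.*;

// Hypothetical sketch: list (name, type, required) for each property of a
// Map-based draft v3-style schema, in declaration order.
public class SchemaFieldLister {
    @SuppressWarnings("unchecked")
    public static List<String> listFields(Map<String, Object> schema) {
        List<String> out = new ArrayList<>();
        Map<String, Object> props = (Map<String, Object>)
                schema.getOrDefault("properties", Collections.emptyMap());
        for (Map.Entry<String, Object> e : props.entrySet()) {
            Map<String, Object> sub = (Map<String, Object>) e.getValue();
            // Draft v3 puts "required" as a boolean inside the subschema.
            boolean required = Boolean.TRUE.equals(sub.get("required"));
            out.add(e.getKey() + ":" + sub.get("type")
                    + (required ? " (required)" : ""));
        }
        return out;
    }
}
```

For the name/age schema above this yields "name:string (required)" then "age:number"; a real walker must also chase $ref subschemas, which is the point of the rest of this thread.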
from json-schema-validator.
What do you mean by proprietary format? A "proprietary JSON Schema"?
If yes, there may be another solution: write a processor which converts this to a JSON Schema, and if some constraints are not enforceable by existing keywords, you can create your own.
This is what I am currently doing with Avro Schema (I am writing an Avro to JSON Schema translator at the moment).
from json-schema-validator.
No, this is not a proprietary json schema, this is a format such as CSV, XML, fixed width, flat file, or some other proprietary format based on raw bytes.
from json-schema-validator.
OK, and you need to "flatten" JSON Schemas for your particular use case?
The more I think of it, the more I think what is needed is this:
- generalize the KeywordValidator interface, so that its first and third arguments can change;
- generalize ValidationProcessor to make its walking logic available to other processors.
Comments?
from json-schema-validator.
My original work around was to create a flattened/resolved json schema, and stuff it into a JsonNode that I could walk.
After looking at the way ValidationProcessor is written, I think that your approach is much better. Making the walking logic generic would essentially remove the need for me to create a flattened json schema. I would basically take a set of bytes, and break it up into sub segments as I traversed down into the subfields of the JSON schema. Of course, as I walked through the bytes and schema, I would be building up a value to return to the user of my custom processor.
Theoretically, if the behavior was pulled out of ValidationProcessor, I could choose to walk the schema and build the flattened version, or walk the schema and parse my payload directly.
from json-schema-validator.
OK, there is something to account for: as you may have seen, the logic in ValidationProcessor is driven by the data -- and, specifically, JSON, and even more specifically, Jackson's JsonNode.
But if I understand correctly, you want other types than JSON to be handled? Or do you convert your data to JSON before processing?
from json-schema-validator.
I did notice that it was driven by the data. I was able to put together a Processor that mimics the behavior of the ValidationProcessor, but instead of walking the data, it walks the schema while resolving references. I have two major things to figure out now.
- if I'm walking the schema, is it possible to "break up" my data in the appropriate way for the recursive call?
- how do I generate a JsonNode and build it up while I walk my schema?
from json-schema-validator.
The more I think about it, the more difficult I think it will be to break the coupling between walking the schema and walking my data. The knowledge on how to "walk" the schema lives in the process, processArray, and processObject methods. These methods also need to contain customized logic on how to break down a chunk of data into the appropriate sub components. The decision on how to break up the chunks can be anywhere between commas (CSV files), to a field on the payload telling you how many chunks follow, and how long each one is.
from json-schema-validator.
As to your questions:
1: it is very hard to do so. Consider, for instance, that several schemas can apply to a same object member. Take this schema:
{
"properties": { "p9": {} },
"patternProperties": { "^p": {}, "\\d+$": {} }
}
If you have a member with name "p9", it will have to be valid against all three schemas. Which means you will end up breaking the data into its individual components anyway, and be driven by the data...
2: I have created a limited-purpose class for that in my Avro to JSON Schema processor (which I have just completed), but the real deal will be when I "do" to Jackson what I promised its author to do: apply the freeze/thaw pattern to JsonNode. At the moment, however (and this is what I use), all types of nodes can be created using a JsonNodeFactory. There is one provided by JacksonUtils.nodeFactory(). Here is the very limited purpose class that I have created:
The main difference with, say, JsonTree and SchemaTree is that where both of these return a JsonNode, in MutableTree they return an ObjectNode -- and as such all mutation methods of ObjectNode are available.
With future Jackson, it will be possible to write something like this:
final JsonNode newNode = node.thaw().put("foo", "bar").etc().etc().freeze();
I plan to extend MutableTree, probably making a Frozen/Thawed pair (not unlike the code above, in fact!), because I think it can be very useful. That would go in -core, of course.
Now, to your second comment: yes, I detected that difficulty as well. What would be needed is a generic way to walk the data. Not impossible, mind. After all, this is part of the plan for Jackson as well, with a beefed up TreeNode. But basically, it means being able to break up data the way JSON is "broken up": null, boolean, string, number, array and object.
(why does GitHub mess up numbered list items?)
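The "several schemas apply to one member" point above can be sketched with a toy selector over plain collections (this is not the library's ObjectSchemaSelector; the label format is made up): a member name picks up its "properties" entry plus every "patternProperties" entry whose regex matches it.

```java
import java.util.*;
import java.util.regex.Pattern;

// Hypothetical sketch: collect labels of every subschema that applies to a
// given object member name.
public class MemberSchemaSelector {
    public static List<String> applicableSchemas(String member,
                                                 Set<String> properties,
                                                 Map<String, String> patternProperties) {
        List<String> out = new ArrayList<>();
        if (properties.contains(member)) {
            out.add("properties/" + member);
        }
        for (Map.Entry<String, String> e : patternProperties.entrySet()) {
            // JSON Schema regexes are unanchored: use find(), not matches().
            if (Pattern.compile(e.getKey()).matcher(member).find()) {
                out.add("patternProperties/" + e.getValue());
            }
        }
        return out;
    }
}
```

For the schema above, member "p9" matches the "p9" property plus both patterns, so three schemas apply -- which is why breaking up the data per member is unavoidable.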
from json-schema-validator.
Hello again,
Is the source code of your processor available somewhere, or does it "touch private matters" already? I'd like to see how you did it; I must say I lack inspiration to get started on generalizing ValidationProcessor.
from json-schema-validator.
Here is a link to the basic stripped down Validator. I called it ParserProcessor.
- This version walks the schema and not the data
- I haven't figured out the best way to break down the data yet
- I haven't figured out the best way to collect a return value yet
This works for the ObjectNodes, but I haven't tested ArrayNodes yet.
Let me know if you have any suggestions or if I'm making any heinous mistakes here.
Thanks!
from json-schema-validator.
OK, a couple of remarks:
- null as a first argument to the ref resolver processor is, uhm ;) You can use a DevNullReport if you don't care about the messages, but since RefResolver only throws exceptions, see below;
- since you only care about the schema, you can use a SchemaHolder instead of a FullData;
- instead of using System.out.println(), you can use report.debug() and initialize a ConsoleProcessingReport with LogLevel.DEBUG as the log level -- this report implementation uses System.out.println();
- line 110 (I need to fix that in the code, too): use JsonPointer.of(index) instead -- this is the only place where I forgot this!
- line 138: JsonPointer.of("properties", field);
- I wouldn't use the digesters if I were you, you are missing some subschemas. For instance, if you have this:
{
"additionalProperties": { "$ref": "some://where" }
}
the digester will not tell you with the current code. In fact, you not only need to walk properties but also additionalProperties and patternProperties.
Anyway: as I need such a walker for what I am going to do next (JSON Schema to Avro), I'm going to have a go at it too. And I have thought about it some more: maybe what is more suited is the way SyntaxProcessor works: SyntaxCheckers already recursively scan schemas and only schemas.
from json-schema-validator.
OK, look at the commit referenced above: it contains the general idea.
If you look at, for instance, DraftV4PropertiesSyntaxChecker, you will see that it collects pointers in the Collection&lt;JsonPointer&gt; so that the SyntaxProcessor which called it can process these pointers in the tree afterwards.
And this is the general idea of walking here: all syntax checkers have the logic in place (and tested), it just needs to be made more generic. The KeywordWalkers will collect pointers so that the SchemaWalker can process them, and one possible processing is to substitute $ref ;)
There can be many other uses for this. I just have to think a little more about how to make this really generic.
from json-schema-validator.
Thanks! I'll take a look and try the SyntaxProcessor/SchemaWalker way of doing things.
from json-schema-validator.
In fact I'm already on it ;)
I have transformed it so as to have a pure walker at first. Hold your breath for a couple of minutes and I should have a first version of it in working order real soon.
from json-schema-validator.
OK, I'll hang on ;)
from json-schema-validator.
Mind helping me a little? It won't be that hard, but it needs to be done ;) If you are willing, I'll explain what to do. It is really not hard.
from json-schema-validator.
Sure, what do you need?
from json-schema-validator.
Do you see the commits I have done so far testing collectors? The principle is as follows: all keywords which can contain subschemas need to be tested. The way to recognize such keywords is to look, in src/test/resources/syntax, at all JSON test files which have pointerTests entries: these are the keywords which need to be tested.
The process is as follows:
- copy the file under the same directory structure (for instance draftv4/allOf.json) from syntax to walk, and modify the file so that the pointerTests array remains alone;
- create the corresponding test under the appropriate directory;
- run it, check that it fails;
- create the corresponding PointerCollector in the source;
- modify the appropriate dictionary (for instance, DraftV4PointerCollectorDictionary) to add the keyword and collector;
- run the test again, check that it succeeds.
It is really fast, since all the test infrastructure/code infrastructure etc. is already created. Note how AbstractPointerCollector is done: you can use getNode(tree) to get the node for this keyword, and basePointer contains a JsonPointer which you only need to .append() to in order to build the pointer to add.
If you wish, you can attack draft v4, I'll do draft v3 ;) Note: I have just finished dependencies, which is in fact in common.
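The collector recipe described above can be sketched as follows; JsonPointer is stubbed here (the real JsonPointer and AbstractPointerCollector live in json-schema-core, with different signatures), and the idea shown is the one for schema-array keywords such as "allOf", "anyOf" and "oneOf": append one pointer per array element to the shared collection.

```java
import java.util.*;

// Hypothetical sketch of a schema-array pointer collector, with JsonPointer
// stubbed as a simple string wrapper.
public class SchemaArrayCollectorSketch {
    static final class JsonPointer {              // stub for the real JsonPointer
        final String value;
        JsonPointer(String value) { this.value = value; }
        JsonPointer append(int index) { return new JsonPointer(value + "/" + index); }
    }

    // For a keyword whose value is an array of n subschemas, collect
    // basePointer/0 .. basePointer/(n-1).
    static List<String> collect(JsonPointer basePointer, int arraySize) {
        List<String> pointers = new ArrayList<>();
        for (int i = 0; i < arraySize; i++) {
            pointers.add(basePointer.append(i).value);
        }
        return pointers;
    }
}
```

A map-valued keyword ("definitions", "properties", "patternProperties") would append member names instead of indices; "not" simply appends the base pointer itself, as discussed further down.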
from json-schema-validator.
Note: I'll do items since it is also common to both drafts.
from json-schema-validator.
OK, I will give it a shot for v4
from json-schema-validator.
Some of the files did not contain any pointerTests. Should I delete these files, or leave them empty?
The files currently look like:
{
}
from json-schema-validator.
Oh, that's true.
Just ignore them.
By the way, "properties" also goes in the common section, so no need to worry about it either.
from json-schema-validator.
It looks like you have dependencies in common. Should I ignore this one too? Additionally, your common properties file does not have the "pointerTests" key and just starts with the array. Should I do the same? My .json files look like...
{
"pointerTests": [
{
"schema": {
"dependencies": {
"b": {},
"a++": {},
"c": null
}
},
"pointers": [
"/dependencies/a++",
"/dependencies/b"
]
}
]
}
from json-schema-validator.
Ah, yes, it is only the array which needs to be copied over, not the object.
And yes, "dependencies" is in common/.
In fact, the only keywords you need to care about now are "anyOf", "allOf", "oneOf", "not" and "definitions". The first three can share a common base class (for instance SchemaArrayPointerCollector) and the last one can reuse SchemaMapPointerCollector, which is also used for patternProperties and properties.
Right now I am testing the core mechanics of SchemaWalker itself -- this also needs to be done ;)
from json-schema-validator.
OK,
I have all the files fixed, and I'm going through the PointerCollector collect method logic. I have definitions working, but I'm struggling with "not".
According to the json-schema.org "latest specification":
5.5.6. not
5.5.6.1. Valid values
This keyword's value MUST be an object. This object MUST be a valid JSON Schema.
So I should check whether the tree node is an object before adding the base pointer. However, this fails your first test case, since "inMyLife" is a string, not an object.
[
{
"schema": { "not": "inMyLife" },
"pointers": [ "/not" ]
},
{
"schema": { "not": {} },
"pointers": [ "/not" ]
}
]
from json-schema-validator.
All the tests now pass. Once you let me know what you want to do with "not", I'll get the pull request ready.
Note:
I did not make Dependencies or PatternProperties extend SchemaMapPointerCollector. I can do that also, but I didn't want to stomp on any changes you were making in the common package.
from json-schema-validator.
As to not, it is quite simple:
pointers.add(basePointer);
Its argument is just a schema.
from json-schema-validator.
Oh, I see what you mean wrt not.
As I wrote these tests for syntax validation to begin with, the pointer was always appended: it was up to syntax validation (when entering validate()) not to go any further, since it tested that the node was an object.
But for SchemaWalker, the schema will be valid -- that is a prerequisite. You can just remove the inMyLife test.
from json-schema-validator.
OK, I am doing a first version of RefExpander.
Note: I think I'll move the code into -core, since it will ultimately be a general-purpose walking mechanism -- and you can update the -core dependency independently. In the meanwhile, if you can test with your own source code, this will be the walk branch of my repo.
I may also change some things after I have written the first version. For instance, I think the .doProcess() method will change to be more generic. But, as I said, right now, a first version.
from json-schema-validator.
OK, I have a first version. But it is butt ugly. It works. But it's ugly. And I know how I can make it break quite easily.
I need to have a mutable tree that I fill on the go, this version is real crap. But... Well... For simple cases like the one I wrote, it works OK...
from json-schema-validator.
OK, I need to think about it some more.
The problem is not with the logic of SchemaWalker and PointerCollector, it is pretty sound. The problem is plugging in whatever work is needed. And I think a Processor is not the way to do it.
I'll think about it some more, right now I need sleep ;) But basically it is needed that we pass a mutable object to process all along the chain, and process it when we walk. .walk() can stay, but .processCurrent() certainly needs to be given the boot for something better.
If you think of a design and have some time ahead of you, I'm open to ideas!
from json-schema-validator.
OK, I have a plan. First, schema walking will be split in two: one walking strategy will not resolve the refs, the other will.
Then there will be an interface:
public interface SchemaListener
{
void onWalk(final SchemaTree tree);
void onTreeChange(final SchemaTree oldTree, final SchemaTree newTree);
}
The first will be called each time the walk function is entered; the second will be called each time the current schema tree changes due to ref resolving. Of course, when not resolving refs, the second method will never be triggered.
I'll implement this and let you know how it went.
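A sketch of what an implementation of the SchemaListener interface proposed above might look like; SchemaTree is stubbed here as a one-method interface (the real one lives in json-schema-core and is richer), and Recorder is a hypothetical listener that simply records the pointers it is walked through and counts tree changes.

```java
import java.util.*;

// Hypothetical sketch of a SchemaListener implementation against the
// interface proposed in the comment above, with SchemaTree stubbed.
public class RecordingListener {
    interface SchemaTree { String currentPointer(); }    // stub

    interface SchemaListener {
        void onWalk(SchemaTree tree);
        void onTreeChange(SchemaTree oldTree, SchemaTree newTree);
    }

    static class Recorder implements SchemaListener {
        final List<String> walked = new ArrayList<>();
        int treeChanges = 0;

        @Override
        public void onWalk(SchemaTree tree) {
            walked.add(tree.currentPointer()); // record every visited pointer
        }

        @Override
        public void onTreeChange(SchemaTree oldTree, SchemaTree newTree) {
            treeChanges++; // fires only when refs are being resolved
        }
    }
}
```

A ref-expanding listener would instead rebuild a schema in onWalk and graft the new tree's node in onTreeChange, which is essentially what SchemaExpander does later in this thread.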
from json-schema-validator.
OK, good news: I have a fairly complete working schema walker, with associated listeners. I could implement schema substitution the way you initially asked for, and this will also help me for Avro, so it is close to being done.
You talked about other uses for this, I'd be curious to know them?
Note that the interface is not finalized yet, I need to find better names, document etc.
from json-schema-validator.
OK, the walk branch is now obsolete, the code has been merged into the master branch. Note, however: work is currently in progress to make this code part of -core, like I hinted earlier.
from json-schema-validator.
Thanks! I will take a look.
Basically here are my use cases:
- Walk the schema or data to unmarshall a payload of bytes into a Json Object that will pass the JSON schema.
- Walk the schema or data to marshall a Json Object back into bytes.
- Be able to load schemas and their references from any URI
- Maintain schemas in an in memory dictionary/library that is loaded only once.
- There will be multiple versions of the same schema (i.e. version 1 through 5 of schema A)
from json-schema-validator.
OK, that makes things more clear, and I have some questions ;)
As to point 1, this unmarshalling can be done with a separate processor, which means only -core is needed, right? What is left to do is to build the appropriate inputs for -validator to operate; what is more, you say "schema or data": if it is a schema only, what about the data? If it is the data, what about the schema? See below for more, however.
As to point 2: am I correct in assuming this is why you needed ref resolved (for the schema)?
As to point 3: SchemaLoader provides everything you need here, since you can support any URI scheme, redirect URIs, preload schemas and so on; however, it is not publicly documented as being a feature, since its primary use at the moment is to be used by a RefResolver to resolve references. Do I understand you want this to be more "public" so as to provide SchemaTrees?
As to point 4: here again, SchemaLoader has what it takes. And, by the way, all this is done via a LoadingConfiguration.
And I don't quite understand point 5?
from json-schema-validator.
And here is the "below for more". I have the intent to provide, in -core, a mechanism to fuse the output of two processors into the input for another:
public interface ProcessorJoiner<OUT1, OUT2, IN>
{
IN join(final OUT1 out1, final OUT2 out2);
}
and the same for split. In your use case, this could be used, for instance, to plug a processor producing a ValueHolder&lt;JsonTree&gt; from a binary source, join it to a ValueHolder&lt;SchemaTree&gt;, and make a FullData. I still have to work out the details, however.
from json-schema-validator.
Note: I have just committed the removal of the walking mechanism from -validator, it is now in -core.
Which means I'll continue work there. The discussion can go on in this issue however.
from json-schema-validator.
Comments on your comments:
(response to 1, including the reference to below): I think that would work well, since one processor will walk the schema, and one custom processor will have logic to walk the data, and the output of both of these values will be sent to "join" for processing. The result can be collected in the return value. In some cases, such as walking an array, the same schema will be passed with each value in the array.
(response to 2): The refs will need to be resolved for marshalling and unmarshalling the bytes. I will need refs resolved in both cases, whether my "walking" is driven by the schema or by the data.
(response to 3): I think in some cases, it will be extremely valuable to me to directly access the SchemaTree of any SchemaNode. I could then actually store more metadata in the JSON Schema, and access it through the exposed SchemaTree. It will definitely give me more flexibility and allow me to ask questions about the schema without having to traverse the entire thing.
(response to 5): For my protocol specifications, I could potentially have the following:
{
"title":"person v1",
"type":"object",
"properties": {
"name" : {
"type": "string",
"required":true
}
}
}
And also
{
"title":"person v2",
"type":"object",
"properties": {
"name" : {
"type": "string",
"required":true
},
"age" : {
"type": "number"
}
}
}
My parsing logic will first have to inspect the data to determine if I should parse the payload with v1 or v2. Then I will lookup "person v2" from some dictionary, so that I can parse the payload with the proper schema. This is really just a namespace issue and I think this is already supported.
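The version-lookup step described above can be sketched as a plain in-memory dictionary keyed by title; SchemaDictionary, register and select are hypothetical names, and the schemas are plain maps standing in for the real preloaded SchemaTrees.

```java
import java.util.*;

// Hypothetical sketch: an in-memory schema dictionary, loaded once, from
// which parsing logic selects "person v1" or "person v2" after inspecting
// the payload for its version.
public class SchemaDictionary {
    private final Map<String, Map<String, Object>> byTitle = new HashMap<>();

    public void register(String title, Map<String, Object> schema) {
        byTitle.put(title, schema);
    }

    public Map<String, Object> select(String baseName, int version) {
        Map<String, Object> schema = byTitle.get(baseName + " v" + version);
        if (schema == null) {
            throw new NoSuchElementException("no schema for " + baseName + " v" + version);
        }
        return schema;
    }
}
```

In the real library this lookup is better served by preloading schemas at distinct URIs via a LoadingConfiguration, as discussed in the following replies.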
from json-schema-validator.
(what do you call SchemaNode? It was there in 1.x but does not exist in 2.x anymore) I am not sure here, do you want to actually extend BaseSchemaTree to include arbitrary metadata?
From your latter point, if I understand you correctly, you need to be able to tell apart what (un)marshalling process(or) will be used according to a defined field in the JSON representation of the data? I understood at first that you wanted to have two different schemas at the same URI. This is not the same thing ;)
This problem has already been expressed by @dportabella in another issue, and @joelittlejohn has also weighed in, since his project has the ability to generate a JSON Schema out of a POJO. But as a first step, have you looked at ProcessorSelector? It is part of the answer here, since you can select what processor is to be called next according to an arbitrary predicate. See this link and search for the ProcessorSelector section:
https://github.com/fge/json-schema-core/wiki/Architecture
Or, if you can extract it more simply, use a ProcessorMap (this is what I do to determine what validation process should take place according to the detected schema version -- see SchemaValidator or ValidationProcessor).
What you would need is a predicate on the data here, so that the correct processing be selected afterwards. Which means processor join is not really relevant to your case, is it?
from json-schema-validator.
I shouldn't need to extend any SchemaTree. I believe I can leverage defined JSON Schema features to get what I need here.
I think we are on the same page. Different "versions" of my messages will be at different URIs.
Oh, I have not read up on ProcessorMap yet. Let me dig into this a bit and I'll get back to you to see if this works. I probably won't be able to work through an example until Monday though.
from json-schema-validator.
I guess I need to know what you call "defined JSON Schema features" here -- none exist that can tell one "serialized" type from another one, unless you choose to interpret an existing keyword in your own way. But then remember that you can add your own keywords. Can you explain more?
from json-schema-validator.
OK, I have a working, tested implementation of SchemaWalker -- one which resolves references, one which doesn't. I just need to make a processor out of it, document everything, and it'll be done.
As to the ref expander itself, I think I'll put it in json-schema-processor-examples, you will be able to pick it up from there.
from json-schema-validator.
-core has been released with SchemaWalker in it. There are two of them: the first does not try to resolve JSON References while the second does. There is an associated Processor as well ;)
I'll illustrate how it works in json-schema-processor-examples, starting with a ref expander which will return a schema with all JSON References resolved. But that is not really needed anymore now ;)
from json-schema-validator.
OK, -core 1.0.2 and -validator 2.0.1 have been released. Note: syntax validation and ref resolving have moved into -core. All the schema walking code is in -core as well.
As to ref expanding, there is an example here:
However, since there is a ResolvingSchemaWalker which does ref expansion for you, maybe you don't need it anymore...
from json-schema-validator.
Thanks for the updates! I got held up on some other tasks today, but I will work this in and let you know how it goes soon.
from json-schema-validator.
So far so good on using the ResolvingSchemaWalker! I also noticed that the json schema throws warnings on unsupported keys. Warnings are perfect since it doesn't prevent the schema from being validated, and I can use unsupported keys as ways to mark-up the schema for my marshalling/unmarshalling logic.
I will mark this as closed for now and I will create another thread if I have any questions regarding the walkers.
Thanks very much for your help!
from json-schema-validator.
I can't see RefExpander anymore in the master tree? Has it been removed?
from json-schema-validator.
I have the same requirement, is it a release feature to print a complete schema?
from json-schema-validator.
I have the same requirement, is it a release feature to print a complete schema?
Have the same question also, did you get an answer on this?
from json-schema-validator.
I think where they landed, after reading the discussion, is that you need to implement your own schema walker. I didn't see them merge anything into the tree to do it. Happy to take a default implementation of it if any of you write one.
from json-schema-validator.
@olgabo , @queenaqian , @simmosn , @huggsboson
@fge actually provided ResolvingSchemaWalker, which does everything the OP needs out of the box. It took me a while to understand by looking at the source code, but it is very straightforward. Here's a code snippet for you to use.
URL url = classLoader.getResource("/some/schema/file/path/in/resource");
JsonNode jsonNode = JsonLoader.fromURL(url);
SchemaTree schemaTree = new CanonicalSchemaTree(jsonNode);
ResolvingSchemaWalker resolvingSchemaWalker = new ResolvingSchemaWalker(schemaTree);
SchemaExpander schemaTreeSchemaListener = new SchemaExpander(schemaTree);
resolvingSchemaWalker.walk(schemaTreeSchemaListener, new ConsoleProcessingReport());
SchemaTree resolvedSchemaTree = schemaTreeSchemaListener.getValue();
resolvedSchemaTree should now have all "$ref" in your schema resolved. Hope that answers your question.
from json-schema-validator.