Automatically exported from code.google.com/p/pubsubhubbub
License: Other
08-09 10:57PM 32.939
Attempting to confirm subscribe for topic = http://www.google.com/reader/public/atom/user/07256788297315478906/label/ブログ衆, callback = elided, verify_token = elided, secret = None, lease_seconds = 2592000

E 08-09 10:57PM 32.939
'ascii' codec can't encode characters in position 73-76: ordinal not in range(128)
Traceback (most recent call last):
  File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 509, in __call__
    handler.post(*groups)
  File "/base/data/home/apps/pubsubhubbub/injector.335461899033304081/main.py", line 252, in decorated
    return func(myself, *args, **kwargs)
  File "/base/data/home/apps/pubsubhubbub/injector.335461899033304081/main.py", line 1465, in post
    sub.verify_token, sub.secret, sub.lease_seconds):
  File "/base/data/home/apps/pubsubhubbub/injector.335461899033304081/main.py", line 2278, in execute
    return designated_hook(*args, **kwargs)
  File "/base/data/home/apps/pubsubhubbub/injector.335461899033304081/main.py", line 1327, in confirm_subscription
    parsed_url[4] = urllib.urlencode(params)
  File "/base/python_dist/lib/python2.5/urllib.py", line 1250, in urlencode
    v = quote_plus(str(v))
UnicodeEncodeError: 'ascii' codec can't encode characters in position 73-76: ordinal not in range(128)
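The root cause is `urllib.urlencode` calling `str()` on a unicode value, which forces an implicit ASCII conversion. A minimal sketch of the usual fix (shown in modern Python for illustration; the parameter values here are hypothetical): encode every value to UTF-8 bytes before building the query string.

```python
from urllib.parse import urlencode

def encode_params(params):
    # Explicitly encode str values to UTF-8 bytes before urlencode,
    # so no implicit ASCII conversion (the failure mode above) can
    # ever be attempted.
    safe = {k: v.encode("utf-8") if isinstance(v, str) else v
            for k, v in params.items()}
    return urlencode(safe)

qs = encode_params({"hub.topic": "http://example.com/label/ブログ衆"})
```

The non-ASCII label is percent-encoded as its UTF-8 bytes rather than raising UnicodeEncodeError.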
Original issue reported on code.google.com by bslatkin
on 10 Aug 2009 at 6:00
SUMMARY:
The spec says in its opening that it uses the keywords "MUST", "SHOULD", "MAY",
etc. as defined in RFC 2119. The keywords have definitions very specific to
interoperability, and differ significantly from the standard English meanings
of the words. However, in various places in the spec these keywords are used
with their standard meaning.
RELEVANT SECTION:
An example:
re: hub.lease_seconds: "Hubs SHOULD make this equal to whatever the subscriber
passed in their
subscription request but MAY change the value depending on the hub's policies."
COMMENT/REQUEST:
Under RFC 2119, "MAY" means the choice is entirely up to the client, with no ill
effect either way.
"SHOULD" means there's a chance things will break if the rule is not followed.
So the first part of
the above example is saying "Supply the same value unless you're happy to break
some
subscriber implementations" while the second part is saying "supply whatever
value you fancy".
I'm not sure which is the intention, but the spec needs to pick one keyword and
one keyword only
for this sentence.
And as I say, this is only one example. The author(s) need to go through each
sentence containing one of these keywords and double-check that it makes sense
when read in the context of the RFC 2119 definitions.
Original issue reported on code.google.com by [email protected]
on 9 Aug 2009 at 3:07
I 07-11 07:03PM 07.423
Retrieved 179 feed entries, 0 of which have been seen before

I 07-11 07:03PM 07.489
Saving 179 new/updated entries

E 07-11 07:03PM 07.817
The request to API call datastore_v3.Put() was too large.
Traceback (most recent call last):
  File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 503, in __call__
    handler.post(*groups)
  File "/base/data/home/apps/pubsubhubbub/hacky-fetch.334811843024886524/main.py", line 212, in decorated
    return func(myself, *args, **kwargs)
  File "/base/data/home/apps/pubsubhubbub/hacky-fetch.334811843024886524/main.py", line 1525, in post
    db.run_in_transaction(lambda: db.put(entities_to_save))
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1718, in RunInTransaction
    DEFAULT_TRANSACTION_RETRIES, function, *args, **kwargs)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1809, in RunInTransactionCustomRetries
    result = function(*args, **kwargs)
  File "/base/data/home/apps/pubsubhubbub/hacky-fetch.334811843024886524/main.py", line 1525, in <lambda>
    db.run_in_transaction(lambda: db.put(entities_to_save))
  File "/base/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1098, in put
    keys = datastore.Put(entities)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 164, in Put
    apiproxy_stub_map.MakeSyncCall('datastore_v3', 'Put', req, resp)
  File "/base/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 72, in MakeSyncCall
    apiproxy.MakeSyncCall(service, call, request, response)
  File "/base/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 244, in MakeSyncCall
    stub.MakeSyncCall(service, call, request, response)
  File "/base/python_lib/versions/1/google/appengine/runtime/apiproxy.py", line 183, in MakeSyncCall
    rpc.CheckSuccess()
  File "/base/python_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 110, in CheckSuccess
    raise self.exception
RequestTooLargeError: The request to API call datastore_v3.Put() was too large.
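The failure is a single datastore Put of all 179 entries blowing past the per-RPC size limit. A minimal batching sketch (the chunk size of 50 is an assumption, not a measured limit):

```python
def chunked(items, size):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# 179 entries (as in the log above) in batches of 50 -> 4 RPCs,
# each comfortably under the per-call size limit.
batches = list(chunked(list(range(179)), 50))
```

Each batch would then be saved with its own put() call instead of one oversized RPC.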
Original issue reported on code.google.com by bslatkin
on 12 Jul 2009 at 2:34
Find a big publisher to start publishing pings. Preferably an open source
one to whom we could send a patch.
Original issue reported on code.google.com by bradfitz
on 12 Mar 2009 at 12:02
SUMMARY:
Each subsequent call to the hub with hub.mode=subscribe MUST *override*
the previous subscription for a specific (topic URL, callback) tuple.
I think that gives the behavior you want. Subscription = update in the
repeated case.
RELEVANT SECTION: 6.1
COMMENT/REQUEST:
From Tony:
> When a subscriber renews their subscription and supplies a new lease_seconds
> value, is the subscription updated with
>
> exactly the expiry time requested by the new lease_seconds value (if it's
> acceptable to the hub)
> the most-future of the old expiry time and the newly requested expiry time
>
> ?
>
> (I'm guessing the *least-future* of the two isn't even worth considering ;-)
> )
>
> If the latter, subscribers have no way of shortening their lease. For now
> I'm going to implement the former.
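The override semantics can be sketched as follows (an illustrative model, not the hub's actual storage code; names are assumptions):

```python
from datetime import datetime, timedelta

subscriptions = {}  # (topic, callback) -> expiry time

def subscribe(topic, callback, lease_seconds, now=None):
    # Each request overrides the prior subscription for the
    # (topic, callback) tuple, so a renewal with a shorter lease
    # genuinely shortens the expiry rather than keeping the
    # most-future of the old and new times.
    now = now or datetime.utcnow()
    subscriptions[(topic, callback)] = now + timedelta(seconds=lease_seconds)

t0 = datetime(2009, 7, 14)
subscribe("http://example.com/feed", "http://cb.example.com/", 2592000, now=t0)
subscribe("http://example.com/feed", "http://cb.example.com/", 3600, now=t0)
```

After the second call the expiry is one hour out, matching "subscription = update in the repeated case."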
Original issue reported on code.google.com by bslatkin
on 14 Jul 2009 at 5:14
Specs are text now. Make them HTML so they're sellable.
Original issue reported on code.google.com by bradfitz
on 12 Mar 2009 at 12:03
Make the front page of pubsubhubbub.googlecode.com pretty & useful.
Link to docs.
Embed presentation.
Original issue reported on code.google.com by bradfitz
on 12 Mar 2009 at 12:03
What steps will reproduce the problem?
1. Go to http://pubsubhubbub.appspot.com/
2. Publish a feed URL (e.g.
http://ssideratos.blogspot.com/feeds/posts/default
and select Publish
(Seemingly Normal Result (204-NO CONTENT))
3. Try to retrieve Publisher Diagnostics by entering the same feed URL in
the Publisher Diagnostics section and selecting "Get Info"
What is the expected output? What do you see instead?
Expected output is information regarding the feed.
What version of the product are you using? On what operating system?
Defect occurs on google hosted hub.
Tested with same results on hub hosted in Google App Engine SDK installed
in Windows Vista 32-bit.
Please provide any additional information below.
Original issue reported on code.google.com by [email protected]
on 13 Aug 2009 at 12:50
SUMMARY:
When verification fails, serve the requesting subscriber a 409. This lets
them test the success of their own subscription.
RELEVANT SECTION: 6.1
COMMENT/REQUEST:
The reference hub implementation returns 409 if a subscription
verification fails in sync mode. This is great and should probably be
in the spec so there is a standard way subscribers (or subscriber
agents) can know if verification failed. This would mean that if they
want to know, they would not use async mode.
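The proposed behavior reduces to a simple status-code choice (a sketch of the proposal, not spec text):

```python
def sync_verification_status(verified: bool) -> int:
    # Proposed standard behavior: 204 No Content when synchronous
    # verification of the subscriber's callback succeeds, 409
    # Conflict when it fails.
    return 204 if verified else 409
```

A subscriber using sync mode can then treat a 409 on its subscription request as a definitive verification failure.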
Original issue reported on code.google.com by bslatkin
on 20 Jul 2009 at 6:32
Traceback (most recent call last):
  File "/base/data/home/apps/pubsubhubbub/secrets.334888813608957861/main.py", line 812, in _enqueue_task
    ).add(target_queue)
  File "/base/python_lib/versions/1/google/appengine/api/labs/taskqueue/taskqueue.py", line 495, in add
    return Queue(queue_name).add(self)
  File "/base/python_lib/versions/1/google/appengine/api/labs/taskqueue/taskqueue.py", line 563, in add
    self.__TranslateError(e)
  File "/base/python_lib/versions/1/google/appengine/api/labs/taskqueue/taskqueue.py", line 592, in __TranslateError
    raise TransientError(error.error_detail)
TransientError
Original issue reported on code.google.com by bslatkin
on 14 Jul 2009 at 4:35
Add support to open source Movable Type software.
Original issue reported on code.google.com by bradfitz
on 12 Mar 2009 at 12:11
SUMMARY:
From Padraic:
"This could be equally solved by requiring Hubs to send one additional
parameter with a confirm/feed update request:
hub.callback - the Hub's URL, i.e. to identify themselves more directly to
a Subscriber (at present it's all indirect)
In a generic implementation, the key used for storing a verify_token is
just a simple string: topic url + hub url (adding more doesn't make it any
more unique - except perhaps the mode depending on treatment of outstanding
reqs), normalised to a key that's safe to use for the current storage
medium. Other specific implementations might need additional context, but
that falls into the realm of a Subscribers special needs - something a
general implementation would address over the query string as optional (I
allow setting optional params in the Zend implementation)."
RELEVANT SECTION: 6.2
Original issue reported on code.google.com by bslatkin
on 3 Aug 2009 at 4:21
SUMMARY:
The Atom specification defines a fixed list of allowable values for the "rel"
attribute of the link element, as well as an IANA
registry where new values may be registered. An Atom feed that contains a rel
value of "hub" is not therefore currently in
compliance with the Atom spec.
RELEVANT SECTION:
"A feed that acts as a topic as per this specification MUST publish, as a child
of atom:feed, an atom:link element whose rel
attribute has the value hub and whose href"
COMMENT/REQUEST:
The authors of this protocol must make a commitment to properly register their
new relation type with the IANA.
Original issue reported on code.google.com by [email protected]
on 9 Aug 2009 at 2:02
SUMMARY:
Typo, wrong http response code
RELEVANT SECTION: 7.2
COMMENT/REQUEST:
It says: "the hub MUST return a 204 Accepted response."
It should be "202 Accepted". The error occurs twice in that paragraph.
Original issue reported on code.google.com by [email protected]
on 13 Jul 2009 at 11:37
SUMMARY:
As an extra level of protection and provability, we're going to have Hubs
generate an HMAC signature of their payloads for each subscriber URL.
RELEVANT SECTION: Affects subscription and notification.
COMMENT/REQUEST:
The signature will go in a header like this:
X-Hub-Signature: sha1=12aab312cc492dd149...
For now sha1 MUST be accepted. We may add support for other algorithms in
the future.
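The header value can be produced with a standard HMAC-SHA1 (a minimal sketch; the secret and payload here are illustrative):

```python
import hashlib
import hmac

def hub_signature(secret: bytes, payload: bytes) -> str:
    # Value for the X-Hub-Signature header: "sha1=" plus the hex
    # HMAC-SHA1 of the notification body, keyed with the
    # subscriber's shared secret.
    return "sha1=" + hmac.new(secret, payload, hashlib.sha1).hexdigest()

sig = hub_signature(b"key", b"The quick brown fox jumps over the lazy dog")
```

Subscribers recompute the same HMAC over the received body and compare it against the header to prove the payload came from the hub unmodified.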
Original issue reported on code.google.com by bslatkin
on 16 Jul 2009 at 7:16
This will take a set of items off the to-be-published queue, find all of
the subscribers for the URL, and then start sending off messages to them.
We may need to do sends for a single subscription in multiple chunks, so
there should be some state for this and a notion of progress. This part will
need to use the asynchronous urlfetch API to do publishes in parallel.
Original issue reported on code.google.com by bslatkin
on 22 Aug 2008 at 6:27
SUMMARY:
Right now it's a good idea for hubs to do this, but let's make it even more
clear in the spec.
RELEVANT SECTION: 7.2
COMMENT/REQUEST:
See detail for reasons in
http://code.google.com/p/pubsubhubbub/wiki/PublisherEfficiency#Atom_feed_windowing
Original issue reported on code.google.com by bslatkin
on 20 Jul 2009 at 10:18
From:
http://code.google.com/p/pubsubhubbub/wiki/HubTestSuite
"If your publish and subscribe endpoints are not /publish and
/subscribe respectively, you can specify them with the PUBLISH_PATH
and SUBSCRIBE_PATH environment variables along with HUB_URL."
The hub should just have a single endpoint, which is multiplexed by the
"hub.mode" parameter.
Original issue reported on code.google.com by bslatkin
on 14 Jul 2009 at 9:19
This is to track an extension to the base protocol.
SUMMARY:
Lots of publishers have many variants of a feed they need to publish to at
the same time (RSS, Atom, mobile, podcast, etc). It should be easy for them
to send a single publish event to the Hub and have the Hub fan it out to
all variants and subscribers.
We need a proposal on how to do this on the wiki. Essentially, each feed
would have an ID it would specify. That ID would be used to collate
different variants of a feed as the same feed from the publisher's
perspective. The hard part here is if we want to allow for collating feeds
across multiple domains, in the case feed variants have different origins.
Original issue reported on code.google.com by bslatkin
on 22 Jul 2009 at 5:18
What steps will reproduce the problem?
On a locally installed pubsubhubbub server, using the Subscribe page, any
callback that explicitly defines the port number displays an exception in
the console window. Omitting the port number and using the standard port
works fine. Of course, the callback is properly configured for each port.
e.g.
http://localhost/mycallback.asp
works fine
but
http://localhost:81/mycallback.asp
causes the exception
What is the expected output? What do you see instead?
This is the traceback generated:
ERROR 2009-08-13 01:44:19,914 main.py:1361] Error encountered while
confirming subscription
Traceback (most recent call last):
  File "C:\pubsubhubbub\hub\main.py", line 1359, in confirm_subscription
    follow_redirects=False)
  File "C:\Program Files\Google\google_appengine\google\appengine\api\urlfetch.py", line 241, in fetch
    return rpc.get_result()
  File "C:\Program Files\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 458, in get_result
    return self.__get_result_hook(self)
  File "C:\Program Files\Google\google_appengine\google\appengine\api\urlfetch.py", line 325, in _get_fetch_result
    raise DownloadError(str(err))
DownloadError: ApplicationError: 2 (11001, 'getaddrinfo failed')
What version of the product are you using? On what operating system?
pubsubhubserver on Windows Vista 32-bit with google app engine 1.2.4 and
python 2.5.4
Please provide any additional information below.
Original issue reported on code.google.com by [email protected]
on 13 Aug 2009 at 2:19
SUMMARY:
Right now Subscribers need to detect when a feed's self link changes, the
feed has been redirected to a new location, or the Hub changes. Instead,
the Hub could notify the subscribers when this change occurs with an
explicit message that's easier to parse. Subscribers who care can repull
the original feed, detect the new parameters, and then renew their
subscription in the new location.
RELEVANT SECTION: 6.2
COMMENT/REQUEST:
More background info is in this wiki document:
http://code.google.com/p/pubsubhubbub/wiki/MovingFeedsOrChangingHubs
Perhaps we could implement this simply as another subscriber verification
request. In the case the feed URL has changed, we can use "hub.mode" as
"changed", and include the new topic URL in the request. In the case the
Hub has changed, we can use "hub.mode" as "moved".
Original issue reported on code.google.com by bslatkin
on 22 Jul 2009 at 5:11
SUMMARY:
Where multiple keywords are used, their order indicates the subscriber's
order of preference. Hubs MUST ignore verify mode keywords that they do not
understand. Subscribers MUST use at least one of the modes indicated in the
list above, but MAY include additional keywords defined by extension
specifications. Hubs MUST ignore any verify modes they do not understand.
RELEVANT SECTION: 6.1
COMMENT/REQUEST:
"Hubs MUST ignore any verify modes they do not understand." is repeated.
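The (non-repeated) rule reduces to picking the first supported keyword in subscriber-preference order; a sketch under the assumption that the hub supports "sync" and "async":

```python
def choose_verify_mode(requested, supported=("sync", "async")):
    # Order in the request indicates the subscriber's preference;
    # the hub ignores keywords it does not understand and takes
    # the first one it supports.
    for mode in requested:
        if mode in supported:
            return mode
    return None  # no mutually understood mode
```

An extension keyword the hub doesn't know is simply skipped rather than treated as an error.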
Original issue reported on code.google.com by bslatkin
on 14 Jul 2009 at 5:01
The Hub's URLs should all be accessible over HTTP and HTTPS. This isn't
reflected in the repo's app.yaml.
Original issue reported on code.google.com by bslatkin
on 24 Jul 2009 at 7:40
Take this,
http://code.google.com/p/jfireeagle/
And make it ping a hub.
(Can we get our Fire Eagle Atom URL to point to a hub?)
Original issue reported on code.google.com by bradfitz
on 12 Mar 2009 at 12:25
This will pull from the queue of recently published items, do a urlfetch to
retrieve the feed, parse it as necessary to extract the latest items, and
compare them to the items we've already seen. Then it will enqueue a new
publish message to be sent to all subscribers.
Original issue reported on code.google.com by bslatkin
on 22 Aug 2008 at 6:25
Discussion from email:
> Certainly for synchronous verification, I'd like it to try just the once.
> Asynchronous, however, is a bit different -- what's best there? I've been
> thinking about how to treat various 3xx/4xx/5xx response codes during
> publication, too -- perhaps there's some common behaviour we can exploit.
> Something like (pulling this out of the air) this: 3xx should be honoured
> and retried (up to N times, some hub-decided limit at which it gets bored of
> trying?), 4xx should be immediate failure (after all, it's a "client error"
> code!) and 5xx should probably be immediate failure (fail fast?). For
> subscriptions, I'd expect 3xx redirections to be followed as appropriate,
> 4xx to immediately cause subscription cancellation, and 5xx to maybe be
> retried later -- keeping the subscription, but suspending it temporarily to
> let the subscriber recover from their 5xx-producing failure, whatever it is.
> Does this seem reasonable?
I agree that 404 means immediate cancellation, 5xx means to retry. I
think other 4xx errors should cause retries too, because sometimes
400s happen.
As for redirects, I'm not sure if we should allow the subscriber to
move itself at any time. But I do see use-cases. Some issues:
- On delivery notification (a POST) we could only allow redirection
through an HTTP 307 response, which should repeat the POST. That means
we can't issue a 301 redirect and have the method repeated, though.
- Allowing for redirection gives an attacker a pretty easy way to
launch a dos attack. Sign up for a ton of subscriptions, then when
they're delivered, redirect to some unhappy third-party. Granted, I as
the original subscriber would need to soak this load, so the risk is
mitigated.
I'm definitely up for documenting the 307 behavior properly. But that
really doesn't allow for full service moves of a subscriber. We'd need
some other way to indicate permanence. What do you think?
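The policy being converged on in this thread can be sketched as a status-code dispatch (an assumption drawn from the discussion above, not settled spec text):

```python
def verification_action(status: int) -> str:
    # One possible policy from the thread: 2xx verifies, 3xx is
    # followed as appropriate, 404 cancels the subscription
    # outright, and other 4xx/5xx are retried (since "sometimes
    # 400s happen"), up to some hub-decided limit.
    if 200 <= status < 300:
        return "verified"
    if 300 <= status < 400:
        return "follow-redirect"
    if status == 404:
        return "cancel"
    return "retry"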
Original issue reported on code.google.com by bslatkin
on 20 Jul 2009 at 5:12
Same idea as synchronous, but this will move subscriptions from the 'needs
verification' state to the 'verified' state, instead of doing it
synchronously in the span of an incoming request from the subscribing user.
This will work by getting driven from the outside by a periodic task on an
authenticated handler.
Original issue reported on code.google.com by bslatkin
on 22 Aug 2008 at 6:19
See this exception:
too much contention on these datastore entities. please try again.
Traceback (most recent call last):
  File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 503, in __call__
    handler.post(*groups)
  File "/base/data/home/apps/pubsubhubbub/secrets.334970643233067897/main.py", line 241, in decorated
    return func(myself, *args, **kwargs)
  File "/base/data/home/apps/pubsubhubbub/secrets.334970643233067897/main.py", line 1855, in post
    work.update(more_subscribers, failed_callbacks)
  File "/base/data/home/apps/pubsubhubbub/secrets.334970643233067897/main.py", line 1161, in update
    self.put()
  File "/base/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 696, in put
    return datastore.Put(self._entity)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 166, in Put
    raise _ToDatastoreError(err)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 2055, in _ToDatastoreError
    raise errors[err.application_error](err.error_detail)
TransactionFailedError: too much contention on these datastore entities.
please try again.
Basically we're re-putting the EventToDeliver entity for each chunk of N
feeds. When we're done with that N, we enqueue another task to handle the
next N that need to be contacted. The trouble is the entity group can't
sustain transactions at this high a rate.
Simple solution:
* Have the continuation task always have a countdown of 1 second to
rate-limit this behavior
* Increase the EVENT_SUBSCRIBER_CHUNK_SIZE constant to increase the
per-iteration latency, reducing the number of needed iterations.
Long-term solution:
* Have one task sequence that iterates through all feeds and another that
actually does delivery
* This will isolate broken callbacks into their own transactional pools
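The simple solution's rate limit can be sketched as follows (`add_task` stands in for the task-queue API here; the URL and parameter names are assumptions for illustration):

```python
EVENT_SUBSCRIBER_CHUNK_SIZE = 50  # larger chunks -> fewer transactions

def enqueue_continuation(add_task, event_key, offset):
    # Always schedule the next delivery chunk with a 1-second
    # countdown, capping writes to the EventToDeliver entity group
    # at roughly one transaction per second.
    add_task(url="/work/deliver_events",
             params={"event_key": event_key, "offset": offset},
             countdown=1)

calls = []
enqueue_continuation(lambda **kw: calls.append(kw), "event123", 50)
```

The countdown throttles the re-put rate; raising the chunk size independently reduces how many continuations are needed at all.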
Original issue reported on code.google.com by bslatkin
on 17 Jul 2009 at 11:32
Example: http://feeds2.feedburner.com/adrants
Right now the Hub parser is unhappy with this.
Original issue reported on code.google.com by bslatkin
on 27 Jul 2009 at 6:06
SUMMARY:
The spec's recommendations for Hub and Subscriber's use of HTTP response codes
conflict directly with the
HTTP spec's definitions of those codes.
RELEVANT SECTION:
"If the subscriber does not agree with the action, the subscriber MUST respond
with a 404 "Not Found"
response. The hub MUST consider other client and server response codes (3xx,
4xx, and 5xx) to mean that
the subscription is not verified, meaning the hub SHOULD retry verification
until a definite
acknowledgement (positive or negative) is received."
COMMENT/REQUEST:
Some HTTP response codes (eg 400 and 410) require that the client (ie the Hub)
does not retry the request,
yet the spec here explicitly requires it to. It also assigns a special meaning
to the 404 response code that
doesn't exist in the HTTP spec. Since the entire point of this exercise is that
the Hub may find itself
verifying a cooperative HTTP host that is NOT in any way a PubSubHubbub
subscriber, the PSH spec needs
to stick very closely to the HTTP spec in this area.
And on a related note, the spec also fails to define how the Hub should behave
should it receive a 2xx
code with an incorrect body.
I'd suggest the correct starting point for this part of the protocol is that
anything other than a 200 response
that includes the correct challenge code be considered a dead failure and that
automatic retries should not
be encouraged. Any suggestions beyond that needs to be thought through very
carefully.
Original issue reported on code.google.com by [email protected]
on 9 Aug 2009 at 2:41
Code to send the subscription request synchronously to the server, get back
acks and nacks, handle error cases.
Original issue reported on code.google.com by bslatkin
on 22 Aug 2008 at 6:10
SUMMARY:
> Hmm. The spec makes this optional when (un)subscribing, but mandatory when
> validating a subscription. What if the hub doesn't want to implement an
> auto-lease-expiry? That is, either it can't be bothered dealing with leases
> at all, or it wants on a case-by-case basis to let a lease last for infinity
> seconds. I can think of three possibilities:
>
> let hub.lease_seconds be the empty-string on validation calls to the
> callback (alternatively, let it be "0", and let "0" mean "unbounded"); or
> let hub.lease_seconds be omitted on validation calls to the callback, to be
> interpreted as "unbounded"; or
> the status quo.
RELEVANT SECTION: 6.2
COMMENT/REQUEST:
Yes I think the infinity case makes sense. It seems like 0 or -1 could be a
fine value for the lease_seconds to mean unbounded. I don't think this
should be optional on the subscription verification request though. It
should be clear how long your subscription will last. But knowing it's
infinity sounds good to me.
Otherwise, we could make the parameter also required during the unsubscribe
case. In this case, the lease_seconds parameter could mean how long the
current subscription has before it expires?
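One way to model the "unbounded" proposal (a sketch of the idea under discussion, not spec text):

```python
def normalize_lease(lease_seconds):
    # Sketch of the proposal: treat 0, a negative value, or an
    # omitted parameter as an unbounded lease that never expires.
    if lease_seconds is None or int(lease_seconds) <= 0:
        return None  # never expires
    return int(lease_seconds)
```

On the verification request the hub would still always send the parameter, so the subscriber knows explicitly whether its lease is finite.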
Original issue reported on code.google.com by bslatkin
on 14 Jul 2009 at 8:07
Add polling support.
Original issue reported on code.google.com by bradfitz
on 12 Mar 2009 at 12:47
SUMMARY:
Right now we don't say much about HTTPS in the spec. We should explain in
more detail why it's good. We should also talk about how the
atom:link[@rel="hub"] can be a non-https HUB link, and it's up to
subscribers to test the other scheme if they want to have a secure
subscription. But publishers may also only advertise the HTTPS url. I'm not
sure if this should be a best practice or a requirement.
Original issue reported on code.google.com by bslatkin
on 15 Jul 2009 at 7:39
What steps will reproduce the problem?
1. Deploy pubsubhubbub svn
What is the expected output? What do you see instead?
Error parsing yaml file:
Unable to assign value 'every minute' to attribute 'schedule':
schedule 'every minute' failed to parse: line 1:12 mismatched character '<EOF>'
expecting 's'
What version of the product are you using? On what operating system?
pubsubhubbub SVN trunk, App engine Python 1.2.4, Mac OS X 10.6
Please provide any additional information below.
According to
http://code.google.com/appengine/docs/python/config/cron.html#The_Schedule_Forma
t
'every minute' should read 'every 1 minutes' and indeed the following patch
(attached) fixes the
problem.
Original issue reported on code.google.com by [email protected]
on 18 Aug 2009 at 9:46
Attachments:
Write plug-in for open source Wordpress system.
Original issue reported on code.google.com by bradfitz
on 12 Mar 2009 at 12:12
The handler should take in a URL from the publisher and queue it for a
fetch/push at a later time. Probably need to think about dos/throttling for
this as well.
Original issue reported on code.google.com by bslatkin
on 22 Aug 2008 at 6:23
People will ask why there's no RSS support.
Document that so we can send people there rather than explain it ad
nauseam.
Original issue reported on code.google.com by bradfitz
on 12 Mar 2009 at 12:09
SUMMARY:
> hub.callback now "should not" contain query-string or fragment: no fragment
> makes sense (well, if my limited understanding of URL semantics is right),
> but why not permit query-strings? (I would, instead, require hubs to respect
> any potentially existing query-strings when adding parameters to the
> callback URL)
RELEVANT SECTION: 6.1
COMMENT/REQUEST:
We should respect query strings!
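Respecting an existing query string when the hub appends its verification parameters can be sketched as (shown in modern Python for illustration; the example URLs are hypothetical):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def add_hub_params(callback: str, extra: dict) -> str:
    # Merge the hub's parameters into any query string already
    # present on the callback URL instead of clobbering it.
    parts = urlparse(callback)
    query = dict(parse_qsl(parts.query))
    query.update(extra)
    return urlunparse(parts._replace(query=urlencode(query)))

url = add_hub_params("http://example.com/cb?id=7", {"hub.mode": "subscribe"})
```

The subscriber's own `id=7` survives alongside the hub-added parameter.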
Original issue reported on code.google.com by bslatkin
on 14 Jul 2009 at 4:59
SUMMARY:
This is in line with RFC 3986:
http://tools.ietf.org/html/rfc3986#section-2.4
RELEVANT SECTION: Subscription and notification
COMMENT/REQUEST:
Basically, non-reserved characters in URLs should always be decoded by the
Hub for all input. That way, all subscriptions will refer to the same
canonical feed URL regardless of any weird encodings passed in by clients.
Subscribers should also do the right thing, but it's not as important.
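The RFC 3986 §2.4 rule can be sketched as decoding only the unreserved set (a minimal illustration, not the hub's actual canonicalizer):

```python
import re

UNRESERVED = set(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~")

def decode_unreserved(url: str) -> str:
    # Percent-encodings of unreserved characters are equivalent to
    # the bare characters, so decode them to reach one canonical
    # form; reserved octets (e.g. %2F) stay encoded because decoding
    # them would change the URL's meaning. Retained encodings are
    # normalized to uppercase hex.
    def repl(match):
        char = chr(int(match.group(1), 16))
        return char if char in UNRESERVED else match.group(0).upper()
    return re.sub(r"%([0-9A-Fa-f]{2})", repl, url)
```

Two subscriptions for `%7Efeed` and `~feed` then collapse to the same canonical topic URL.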
Original issue reported on code.google.com by bslatkin
on 4 Aug 2009 at 5:15
Make a bookmarklet that's hard-coded to search current page for Atom rel
and ping hard-coded pubsubhubbub.appspot.com.
(Ideally the bookmarklet would be generic and read the Atom feed to learn
where to ping, but this is a bootstrapping thing, or can be considered a
feature of this hub.....)
Original issue reported on code.google.com by bradfitz
on 12 Mar 2009 at 12:06
SUMMARY:
> Re: 7.3 If, after a content fetch, the hub determines that the topic feed
> has changed. . .
>
> If this is the first content fetch the Hub makes, should the Hub return ONLY
> items newer than time a subscription was made (meaning they would have to
> store that timestamp individually for each sub), return ALL items since they
> are all new in the eyes of the Hub, or just the most recent item to get the
> ball rolling with the subscriber.
RELEVANT SECTION: 7.3
COMMENT/REQUEST:
From me: I think this part of the spec is undefined right now. I'd say it's
up to the hub to decide. Either behavior is not surprising to a subscriber,
so I think it's okay.
Leaving this as a wont-fix bug for now.
Original issue reported on code.google.com by bslatkin
on 14 Jul 2009 at 7:53
Brad might know some people at a big hosted blogging service that might
want to ping and subscribe.
See what they think of spec & code.
Original issue reported on code.google.com by bradfitz
on 12 Mar 2009 at 12:10
SUMMARY:
A few people have split their hubs into /publish and /subscribe paths. A
Hub should serve on a single URL and use the "hub.mode" parameter to
multiplex the request.
RELEVANT SECTION: Somewhere in there.
COMMENT/REQUEST:
Make this very clear!
Original issue reported on code.google.com by bslatkin
on 17 Jul 2009 at 4:54
SUMMARY:
The atom:entry elements contained in the example feeds do not have the required atom:author.
RELEVANT SECTION: 4, 7.4, possibly 5
COMMENT/REQUEST:
Section 4.1.2 of RFC4287 says
> o atom:entry elements MUST contain one or more atom:author elements,
> unless the atom:entry contains an atom:source element that
> contains an atom:author element or, in an Atom Feed Document, the
> atom:feed element contains an atom:author element itself.
Examples should conform to this requirement.
Obviously, the atom:author of atom:entry/atom:source in the aggregated feed should
reflect that of the originating atom:entry/atom:feed.
Original issue reported on code.google.com by [email protected]
on 14 Jul 2009 at 1:53
Example feed: http://feeds.feedburner.jp/junkblog
Top of the feed says: <?xml version="1.0" encoding="EUC-JP"?>
EUC-JP = http://en.wikipedia.org/wiki/Extended_Unix_Code
Background info from:
http://mail.python.org/pipermail/python-list/2008-January/646960.html
> Expat doesn't support as many encodings as Python does, and its repertoire
> of encodings can't be extended; it supports UTF-8, UTF-16, ISO-8859-1
> (Latin1), and ASCII. If encoding is given it will override the implicit or
> explicit encoding of the document.
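A possible workaround (a sketch, not the hub's implementation): decode the feed with Python's richer codec set, rewrite the XML declaration, and hand expat UTF-8, which it does support.

```python
import re

def feed_to_utf8(raw: bytes, declared_encoding: str) -> bytes:
    # Transcode a feed whose declared encoding (e.g. EUC-JP) expat
    # cannot handle: decode it with Python's codec, patch the XML
    # declaration to match, and re-encode as UTF-8.
    text = raw.decode(declared_encoding)
    text = re.sub(r'encoding="[^"]+"', 'encoding="utf-8"', text, count=1)
    return text.encode("utf-8")

fixed = feed_to_utf8(
    '<?xml version="1.0" encoding="EUC-JP"?><feed/>'.encode("euc-jp"),
    "euc-jp")
```

The encoding name would need to come from the feed's XML declaration or the HTTP Content-Type header before this step.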
Original issue reported on code.google.com by bslatkin
on 10 Jul 2009 at 6:28
SUMMARY:
> "If the subscriber does not agree with the action, the subscriber MUST
> respond with a 404 "Not Found" response. The hub MUST consider other
> client and server response codes (3xx, 4xx, and 5xx) to mean that the
> subscription is not verified, meaning the hub SHOULD retry
> verification until a definite acknowledgement (positive or negative)
> is received."
RELEVANT SECTION: 6.2
COMMENT/REQUEST:
Retries shouldn't be infinite.
Original issue reported on code.google.com by bslatkin
on 14 Jul 2009 at 4:55
Write story for OAuth.
Original issue reported on code.google.com by bradfitz
on 12 Mar 2009 at 12:36
Otherwise we will get blocked by certain publishers. This work will also
give us the basis for collecting polling stats and adjusting the repoll
frequency.
Original issue reported on code.google.com by bslatkin
on 24 Apr 2009 at 9:32
Start playing with XMPP support. Make an XMPP chatbot for subscriptions as
a demo, as a ramp-up to full XEP-0060 support.
Original issue reported on code.google.com by bradfitz
on 12 Mar 2009 at 12:04