
chatexchange's Introduction

ChatExchange

(GitHub Actions build status badge for master)

A Python3 API for talking to Stack Exchange chat.

  • Supported Python versions (tests run by GitHub Actions): 3.7, 3.8, 3.9, 3.10, 3.11, 3.12
  • Status unclear (not run on GitHub Actions, which no longer supports them): 2.7 (sic), 3.4, 3.5, 3.6

Dependencies

pip install chatexchange pulls in the following libraries:

  • BeautifulSoup (pip install beautifulsoup4)
  • Requests (pip install requests)
  • websocket-client for the experimental websocket listener (pip install websocket-client). This module is optional; without it, initSocket() from SEChatBrowser will not work.

The package has a number of additional development requirements; install them with

pip install chatexchange[dev]

or pip install .[dev] if you are in the top directory of a local copy of the source.

Shortcuts

  1. make install-dependencies will install the necessary Python package dependencies into your current environment (active virtualenv or system site packages)
  2. make test will run the tests
  3. make run-example will run the example script
  4. make will run the above three in order

License

Licensed under either of

  • Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
  • MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

chatexchange's People

Contributors

absvolatility, artofcode-, awegnergithub, badp, bytecommander, csnardi, diazona, jeremybanks, makyen, manishearth, mego, michaelpri10, quartata, teward, thomas-daniels, tripleee, undo1


chatexchange's Issues

Example `chat.py` crashes on `message.reply()`

I have been able to get the chat.py example working, with one small problem. I tried sending my bot "!!/random", and it crashes:

Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 504, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/ubuntu/tardisbot/ChatExchange/chatexchange/browser.py", line 272, in _runner
    self.on_activity(activity)
  File "/home/ubuntu/tardisbot/ChatExchange/chatexchange/wrapper.py", line 243, in on_activity
    on_event(event, self)
  File "main.py", line 56, in on_message
    message.reply(str(random.random()))
AttributeError: 'MessagePosted' object has no attribute 'reply'

I think this is because the callback receives a MessagePosted event object instead of a Message object, which is what has the reply method.
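
A minimal workaround sketch until such a method exists: resolve a full Message from the event's message_id and reply to that. This assumes the event exposes message_id and type_id and that the client/wrapper object passed to the callback exposes get_message(); adjust the names to your ChatExchange version.

import random

def on_message(event, client):
    if getattr(event, 'type_id', None) != 1:   # 1 == MessagePosted
        return
    # Look the message up by id so we get an object with .reply()
    message = client.get_message(event.message_id)
    message.reply(str(random.random()))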

How can I get the message ID of a message I send?

Say I send a message I would like to modify later. Do I have to look at the events to find that message ID (...or way better: the message object), or is there another way?

(It seems that chat does send you the new message ID in response to the POST to chats/.../messages/new, but I don't know how convoluted the process would be to bubble this response all the way to the Rooms class.)
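
One workaround sketch in the meantime: watch for MessagePosted events from your own account and remember their ids. The attribute names (type_id, user_id, message_id) and client.get_me() follow usage elsewhere on this page and should be treated as assumptions.

my_message_ids = []

def remember_own_messages(event, client):
    me = client.get_me()
    if getattr(event, 'type_id', None) == 1 and event.user_id == me.id:
        my_message_ids.append(event.message_id)

# room.watch(remember_own_messages)
# room.send_message("hello")   # shortly afterwards, my_message_ids[-1] is its id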

"Limit concurrent jobs" to 1 in Travis settings for this repo

Depending on Travis worker availability, the current configuration could run as many as 8 instances of the test suite in parallel, one for each version of Python being tested. This can cause Stack Exchange's rate limits to be exceeded in some cases, leading to flaky tests. (I think that may be what happened to the last build on master, but I'm not sure. I'd like to be able to rule it out, at least.)

It may take longer for the builds to complete, but I think it's probably worth disabling parallel execution in order to get more trustworthy build results. @Manishearth should consider setting "Limit concurrent jobs" to 1 in the Travis interface:

(screenshot: the "Limit concurrent jobs" option on the Travis settings page)

KeyError that happens at random

When I run my chat bots, then sometimes I get a KeyError, which seems to happen at random.

Traceback:


Traceback (most recent call last):
  File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
    self.run()
  File "C:\Python27\lib\threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "Path\ChatExchange\chatexchange\browser.py", line 625, in _runner
    self.on_activity(json.loads(a))
  File "Path\ChatExchange\chatexchange\rooms.py", line 71, in on_activity
    for event in self._events_from_activity(activity, self.id):
  File "Path\ChatExchange\chatexchange\rooms.py", line 85, in _events_from_activity
    event = events.make(room_event_data, self._client)
  File "Path\ChatExchange\chatexchange\events.py", line 16, in make
    return cls(data, client)
  File "Path\ChatExchange\chatexchange\events.py", line 48, in __init__
    self.room = client.get_room(data['room_id'], name=data['room_name'])
KeyError: 'room_id'
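
A defensive fix could look roughly like the sketch below: tolerate activity entries that lack room information instead of raising KeyError. The field and method names come from the traceback above; this is not the shipped implementation.

def _init_room(self, data, client):
    room_id = data.get('room_id')
    if room_id is None:
        # Some activity entries apparently arrive without room information.
        self.room = None
    else:
        self.room = client.get_room(room_id, name=data.get('room_name'))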

Replace with (modified) ChatExchange6?

As @Manishearth suggested, I have now rewritten my fork and renamed it ChatExchange6 (as 2×3 = 6); it now supports both Python 2 (Travis CI successfully tested 2.6) and Python 3 (Travis: 3.4).

I tidied up (i.e. deleted) those old branches and have edited some other things over time, so I am not sure whether you want to lose what you had there. Therefore I did not open a pull request; instead I am asking you this way to compare our two versions and decide what you do and do not want to merge into your main repository.

Get current list of users in the room

Some applications would benefit from being able to get a list of the users currently in a room. For example, I'd like to be able to do this:

>>> charcoal_hq.users 
[<ChatExchange.chatexchange.users.User object at 0x10c8db550>, 
<ChatExchange.chatexchange.users.User object at 0x10c8db710>, 
<ChatExchange.chatexchange.users.User object at 0x10c8db5d0>,
etc.]

Can we add this feature?

Add a .reply method to MessageEvent

Simply add a .reply(message) method to MessageEvent.

    if message.content.startswith('!!/random'):
        print message
        print "Spawning thread"
        message.reply(str(random.random()))
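
For reference, a chat reply is just an ordinary message prefixed with ":<message_id>", so the method could be little more than the sketch below (room and send_message are assumed to match the wrapper's existing API).

def reply_to(room, message_id, text):
    # Stack Exchange chat treats ":<id> text" as a reply to message <id>.
    room.send_message(":{} {}".format(message_id, text))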

Message class

(related to some discussion in #43)

We should have a Message class. A MessagePosted event's .message and the corresponding UserMentioned event's .message should both refer to the same Message instance. If a MessageEdited event occurs, it would update this shared instance.

The messages could be de-duplicated using a weak-key dictionary on the Message class.

We will also add a bounded deque (perhaps maxlen = 1000) of .recent_events on wrapper, which keeps recent Message instances alive while they're most likely to be referenced again. (Otherwise we could end up creating and destroying several Message instances for different events (UserMentioned, MessageEdited) that are really in response to the same action, and we wouldn't be able to automatically aggregate information from the associated events.)

Message objects will only use data from events received from rooms we're watching; we aren't going to have any properties which make new requests to the server to retrieve missing data. For all unknown fields (e.g. if we haven't seen an event that tells us the star count of a message), the value will just be None.

The .reply() method on MessageEvent will be moved to Message.

It will also have a .star(value=True) method (see #50).
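
A sketch of the de-duplication idea, assuming messages are keyed by their chat message id: a WeakValueDictionary returns the same Message instance for every event about that id, and a bounded deque of recent events keeps the most recently seen instances alive.

import collections
import weakref

class Message(object):
    _instances = weakref.WeakValueDictionary()

    def __new__(cls, message_id):
        message = cls._instances.get(message_id)
        if message is None:
            message = super(Message, cls).__new__(cls)
            message.id = message_id
            cls._instances[message_id] = message
        return message

recent_events = collections.deque(maxlen=1000)   # events hold strong refs to their .message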

ValueError: too many values to unpack

Found this because SmokeDetector didn't want to mark some posts as true positive:

File "/path/ChatExchange/chatexchange/browser.py", line 414, in get_transcript_with_message
    room_soup, = transcript_soup.select('.room-name a')

Make developer dependencies optional

As discussed in chat here, pytest eventually shouldn't be a strict dependency for this project. It should still be defined in setup.py, but as optional, either as an extra or through some other mechanism.
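
One way this could look in setup.py (a sketch; the dependency lists are illustrative): pytest moves into an extra so that pip install chatexchange[dev] pulls it in while a plain install does not.

from setuptools import setup

setup(
    name='chatexchange',
    install_requires=['beautifulsoup4', 'requests'],
    extras_require={
        'dev': ['pytest'],            # only needed for running the test suite
    },
)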

Unable to log in properly

It appears that a method is being called twice.

Here's a debuggable version of this method:

@staticmethod
def user_id_and_name_from_link(link_soup):
    user_name = link_soup.text
    print("link soup " + link_soup['href'].split('/')[-2]);
    user_id = int(link_soup['href'].split('/')[-2])
    return user_id, user_name

And the relevant output from it:

link soup 242089
link soup users

What is the problem?

What interface would we ideally want to expose for library users?

Without thinking too much about the details of the implementation, I'm wondering what kind of interface would be nicest to expose to users of the library. I'm currently imagining something like:

import chatexchange

chat = chatexchange.connect( # class ChatConnection
    'chat.stackexchange.com', '[email protected]', 'password')

me = chat.current_user # class User
assert me.user_name == "Jeremy Banks"

manish = chat.get_user(31768)
assert manish.user_name == "ManishEarth"

charcoal_sandbox = chat.get_room(14219) # class Room
assert charcoal_sandbox.name == "Charcoal Chatbot Sandbox"
assert manish in charcoal_sandbox.owners

def on_event(event): # class Event
    assert event.room == charcoal_sandbox
    assert event.room_id == charcoal_sandbox.room_id == 14219

    if event.type != event.Types.message_posted:
        return

    message = event.message # class Message
    # the Message object is shared between multiple events that refer
    # to it; e.g. a message_posted and following message_edited

    if message.text_body == "hello bot":
        charcoal_sandbox.send("thank you; goodbye user")

        watcher.stop()
        # this implicitly terminates the process because the watcher
        # was running the last remaining daemon thread.
    else:
        message.reply("please say `hello bot`")

watcher = charcoal_sandbox.watch(on_event) # class RoomWatcher
# Watchers are abstracted from the underlying connections. You could
# have several watchers that are being fed from the same
# socket or polling connection.

# This interface doesn't directly expose join/leaving rooms; the
# Connection will just ensure we're in a room if we're watching it, or
# if we need to talk in it. We could not worry about leaving rooms at
# this point.

Let me know if you have any thoughts.

Throttling, concurrency, structure

I've been thinking a bit about what to work on next, after Message (and possibly User and Room) are implemented. Here are some rough thoughts I've had. (I'll probably create more specific tickets for associated work, and implementation details, but these topics are pretty related so it would be useful to have initial discussion in one place.)

Throttling

When we post a new message, or edit one, the requests are throttled, and retried when appropriate. This is good, but doesn't apply to any of the other requests we make. We should generalize the existing code, and make it easy to apply for different types of requests. (There would need to be some code specific to recognizing success/temporary error/fatal error for different types of requests.) Ignoring the implementation for a sec, what behaviour do we want?

It might be reasonable to have two request queues, one for read requests and one for write requests. That way we can keep seeing updates, even while our chat messages are throttled and being retried. Maybe by default they could limit us to one request per five seconds, or maybe a smaller limit that increases if we keep sending a lot of requests. Or maybe the read queue could allow a couple of requests to be in flight at once, while writing is limited to a single request.

Concurrency

The concurrency model of this code might have been sane before I touched it, but given the work I've done I'm sure it isn't any more, and there are probably many possible race conditions that could result in bugs.

For example, users from two different threads could both make requests and read and write from a Message event at the same time, possibly resulting in errors.

I propose that Wrapper ensures that nothing modifies its data from outside of a main worker thread. Anything that could modify data will need to be passed in through a queue, which will be processed by that single thread. Wrapper will also manage our connections for throttling. Any public (non-prefixed) methods on Wrapper should be safe to use from any thread.

Given that you have a message, and you access the missing message.user_name:

  • it calls message.scrape_transcript()
  • which calls wrapper.scrape_transcript_for_message_id(...)
  • which queues a request for the worker thread, then blocks on a response queue
  • the worker thread makes the HTTP request through Browser (it uses the throttling mechanism, so the request may be queued and not take place instantly)
  • once the worker thread gets the response, it updates all of the Messages and other objects that it has learned about
  • the worker thread returns a value through the response queue
  • execution resumes in the initial thread, with the message.user_name value now populated

The Wrapper should also de-duplicate requests made at the same time, when possible. For example, if two different threads both call request_transcript around the same time because a field is missing, wrapper should notice that they want the same information and only make a single request.

Structure

I'd like to clearly define a division of responsibilities between wrapper and client. One possibility is as follows:

chatexchange.Browser (possible alternative name: Connection)

  • provides a clean interface for operations with the chat server
  • contains everything that directly touches the network
  • contains everything that handles raw HTML
  • returns JSON-style objects (str/int/float/dict/list), either as provided by Stack Exchange or scraped from soup
  • does not manage retries or throttling, except to raise appropriate exceptions
  • nothing thread-related
  • could be used by third parties who want to implement their own chat library, without dealing with soup or URLs

chatexchange.Client (suggested rename from Wrapper [1])

  • higher-level interface to chat
  • returns nice objects like Event, Message, User, Room
  • all public methods (on Client and on objects returned from public methods) are safe to use from any thread (though they may block)
  • retrying and throttling logic

At this point, it might make sense to delete asyncwrapper.


1 "Wrapper" sounds like more of an implementation description than an explanation of what it provides, so I'd prefer a different name if one makes sense.

Add .text_content to events that have .content

It would be convenient to have a .text_content property on wrapper.Event which parses the HTML and returns a plain-text version of a message (html/xml entities interpreted, tags removed).
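
A minimal sketch of such a property using BeautifulSoup, which is already a dependency: strip the tags and let the parser expand the entities. The property name and the .content attribute follow the issue text; the real implementation may differ.

from bs4 import BeautifulSoup

def html_to_text(html):
    # get_text() drops the tags; the parser has already decoded entities like &lt;
    return BeautifulSoup(html, 'html.parser').get_text()

# html_to_text('x &lt; y &amp; <b>bold</b>')  ->  'x < y & bold'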

User class

As with Message events in #51, we should have deduplicated User instances. These would be referred-to both directly by Events and by Messages.

Messages would have strong references to their owner Users. Users would have a set of weak references to their messages (perhaps a WeakValueSet).

For both this and our other uses, we probably shouldn't expose weak-referencing collections in public attributes, because of the possible issues with iterating over them. We should just have a property or method that returns a strong copy of the collection.

Because we don't make any explicit requests for information about Users and only use what's available, we might only know Users' IDs and names.

Users should probably have a slug property, which returns the name with spaces and special characters removed, as you'd use for an @-mention in chat, and a .message(room, content) method which uses the slug to attempt to message a user in a given room. (Although this will not be successful if they're not active in the room, because we don't have any super-ping functionality or anything, and this could accidentally ping the wrong user just like @-mentions in chat always can.)
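
A rough sketch of the slug and message helpers described above. The exact slug rule (drop whitespace, and perhaps apostrophes) is an assumption about what "spaces and special characters removed" means in practice.

import re

class User(object):
    def __init__(self, user_id, name):
        self.id = user_id
        self.name = name

    @property
    def slug(self):
        # e.g. "Jeremy Banks" -> "JeremyBanks", usable in an @-mention
        return re.sub(r"[\s']+", "", self.name)

    def message(self, room, content):
        # Best-effort ping; only reaches the user if they are active in the room.
        room.send_message("@{} {}".format(self.slug, content))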

Concurrency/Thread-safety

As discussed in #59, the public Client interface should be entirely thread-safe, whatever the users do. (The current implementation probably has unsafe behaviour internally, even without any help from the users.)


The concurrency model of this code might have been sane before I touched it, but given the work I've done I'm sure it isn't any more, and there are probably many possible race conditions that could result in bugs.

For example, users from two different threads could both make requests and read and write from a Message event at the same time, possibly resulting in errors.

I propose that Client ensures that nothing modifies its data from outside of a main worker thread. Anything that could modify data will need to be passed in through a queue, which will be processed by that single thread. Client will also manage our connections for throttling. Any public (non-prefixed) methods on Client should be safe to use from any thread.

Given that you have a message, and you access the missing message.user_name:

  • it calls message.scrape_transcript()
  • which queues a request for the worker thread, then blocks on a response queue
  • the worker thread makes the HTTP request through Browser (it uses the throttling mechanism, so the request may be queued and not take place instantly)
  • once the worker thread gets the response, it updates all of the Messages and other objects that it has learned about
  • the worker thread returns a value through the response queue
  • execution resumes in the initial thread, with the message.user_name value now populated

The Client should also de-duplicate requests made at the same time, when possible. For example, if two different threads both call request_transcript around the same time because a field is missing, client should notice that they want the same information and only make a single request.

If a request method was called from the main worker thread, maybe it could recognize that and make the call without using the queue, to prevent a deadlock.
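
A sketch of the single-worker-thread model described here: callers enqueue work and block on a per-call response queue, while a call that is already running on the worker thread bypasses the queue to avoid the deadlock mentioned above. All names are illustrative, not the current Client API.

import queue
import threading

class Client(object):
    def __init__(self):
        self._requests = queue.Queue()
        self._worker = threading.Thread(target=self._run_worker, daemon=True)
        self._worker.start()

    def _run_worker(self):
        while True:
            func, args, response = self._requests.get()
            response.put(func(*args))          # only this thread touches shared state

    def _do(self, func, *args):
        if threading.current_thread() is self._worker:
            return func(*args)                 # already on the worker thread: no queue, no deadlock
        response = queue.Queue(maxsize=1)
        self._requests.put((func, args, response))
        return response.get()                  # block until the worker has answered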

Google OAuth2 Login

Not a priority at all, but since it was mentioned, I thought I'd create a ticket to discuss it.

I don't think there's any official way to programmatically log in to Google using a username and password, and it might not be the easiest login flow to work through with something like BeautifulSoup. Instead, we might consider just using a real browser login, requiring user interaction. It won't be suitable for all cases, but it will be suitable for many. (Particularly if the login token could be persisted easily; maybe as part of a general Browser serialization mechanism.)

Hopefully we could just create a temporary local web server to run the OAuth authentication and capture the token.
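
A very rough sketch of that temporary local server, using only the standard library: listen once on a localhost redirect URI and capture whatever the provider appends to the query string. Google's actual OAuth endpoints and parameters are not modelled here.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class OAuthCallback(BaseHTTPRequestHandler):
    captured = {}

    def do_GET(self):
        OAuthCallback.captured = parse_qs(urlparse(self.path).query)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You may close this window.")

server = HTTPServer(('localhost', 8912), OAuthCallback)   # port is arbitrary
server.handle_request()        # blocks until the browser is redirected back
code = OAuthCallback.captured.get('code')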

ConnectionErrors and Timeouts

When watching via HTTP, I have been getting ConnectionErrors and Timeouts.

I've created this patch and it seems to solve most of my issues. The basic strategy is to retry these two particular failures up to 5 times (MAGIC NUMBER ALERT!) and if the failure occurs after these tries, then reraise the exception. If it does not reoccur, blame it on "the network" and be glad that it works.

It is possible this is related to #68 / #69.

AWegnerGitHub@e155cb4

Would you welcome this patch as a pull request?
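
For reference, the strategy described above amounts to something like the sketch below (this is not the linked patch itself): retry ConnectionError and Timeout up to five times, then re-raise.

import time
import requests

MAX_RETRIES = 5   # the magic number from the description

def get_with_retries(session, url, **kwargs):
    for attempt in range(MAX_RETRIES):
        try:
            return session.get(url, **kwargs)
        except (requests.ConnectionError, requests.Timeout):
            if attempt == MAX_RETRIES - 1:
                raise              # still failing after five tries: give up
            time.sleep(2 ** attempt)   # otherwise blame "the network" and retry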

Relicense under dual MIT/Apache 2.0

We currently use the GPL, which is a very restrictive license, and we'd like to change it to something more permissive (in this case, dual licensed under MIT/Apache 2.0). We'll need consent from all contributors to this repository to do so:

To agree to relicensing, just leave this comment below or otherwise indicate consent:

I license past and future contributions under the dual MIT/Apache-2.0 license, allowing licensees to choose either at their option.

Some more info:

This involves adding the following to the README and including the full text of both licenses in the repository:

## License

Licensed under either of

 * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
 * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)

at your option.

### Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
additional terms or conditions.

MIT is pretty permissive, so it's preferred by most; however, it requires you to include the license text in everything that uses the code. Apache doesn't have this issue, but it is incompatible with GPLv2. A dual license gives users the freedom to choose whichever license suits them.

Some more rationale can be found in this similar issue.

Deal with the "Allow SE.com/SO.com/M.SE.com to use this openid" page

Since the SE OpenID is integrated, on some sites there is no confirmation message unless you enter it as a custom OpenID. We're doing the latter here, so we should probably do the confirmation automatically, OR switch to the built-in mechanism (which is a bit roundabout).

Related: #8 as well

Subclasses for different Event types

Instead of having a single Event type, with a long conditional chain that decides what properties to add based on the type of event, we could have a bunch of subclasses of Event for different types, with a constructor that would take event JSON data and return an Event instance of the appropriate subtype (or just a plain Event if the type is unknown).

@Event.register_type
class MessagePosted(Event):
    type_id = 1

    def __init__(self, data, wrapper=None):
        super(MessagePosted, self).__init__(data, wrapper)
        self.content = data['content']
        self.text_content = _utils.html_to_text(self.content)
        self.user_name = data['user_name']
        self.user_id = data['user_id']
        self.message_id = data['message_id']

from chatexchange import events

event = events.make(data)
if isinstance(event, events.MessagePosted):
    print("Got message:", event.text_content)

    assert event.type_id == 1

Because of the number of classes this would add, it might be appropriate to put them in their own chatexchange.events module. But maybe not.

This would be particularly valuable if we ended up implementing an interface as discussed in #43, where different event types would not just have different static attributes but would have different methods as well.

Handle prompt when logging into site for the first time

The first time you use Stack Exchange's login as an OpenID provider to log in to a Stack Exchange site, you're met with this prompt asking you for confirmation:

(screenshot: the OpenID confirmation prompt shown on first login)

ChatExchange currently does not know how to handle this, and you need to manually authenticate it once. I will update it to recognize when it meets this page, and to confirm the login. (If I get lazy, at minimum I will detect when this happens and raise an appropriate error.)

Segmentation faults on PyPy on Travis

I've had a couple of builds fail because of segfaults on PyPy on Travis.

It's possible that this is due to thread-unsafe behaviour that we're planning to remove, but it doesn't seem to ever be an issue with CPython, so I'm going to just switch back to using CPython on Travis for now. We can try switching back to PyPy once we address the potential concurrency issues.

Intermittent login failure

Traceback (most recent call last):
  File "report.py", line 18, in <module>
    wrap.login(username,password)
  File "SEChatWrapper.py", line 49, in login
    self.br.loginChatSE()
  File "SEChatBrowser.py", line 67, in loginChatSE
    authToken = chatlogin.find('input', {"name": "authToken"})['value']
TypeError: 'NoneType' object has no attribute '__getitem__'

Probably due to throttling

Push to PyPI

Using make is so much more complex than using pip. Since there is no C code or other code that needs compiling, there shouldn't be problems pushing it to PyPI.

Chatbot crashes if you star a pinned message

This has happened to me several times. If I have a bot running on a chatroom and if I star a pinned message, my bot crashes with the following traceback:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
    self.run()
  File "C:\Python27\lib\threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "build\bdist.win32\egg\chatexchange\browser.py", line 632, in _runner
    self.on_activity(activity)
  File "build\bdist.win32\egg\chatexchange\rooms.py", line 64, in on_activity
    for event in self._events_from_activity(activity, self.id):
  File "build\bdist.win32\egg\chatexchange\rooms.py", line 85, in _events_from_activity
    event = events.make(room_event_data, self._client)
  File "build\bdist.win32\egg\chatexchange\events.py", line 16, in make
    return cls(data, client)
  File "build\bdist.win32\egg\chatexchange\events.py", line 51, in __init__
    self._init_from_data()
  File "build\bdist.win32\egg\chatexchange\events.py", line 82, in _init_from_data
    self._update_message()
  File "build\bdist.win32\egg\chatexchange\events.py", line 100, in _update_message
    del message.pinner_user_ids
AttributeError: pinner_user_ids

Entities not parsed

When message.content is read, it leaves the HTML entity names in place. I'm not sure if that's a bug, but I was expecting them to be replaced by their character equivalents.

Travis timing out

Travis is timing out, not sure why

https://travis-ci.org/Manishearth/ChatExchange/builds/34733040

_____________________________ test_room_iterators ______________________________
test/test_rooms.py:33: in test_room_iterators
    'stackexchange.com', live_testing.email, live_testing.password)
chatexchange/client.py:69: in __init__
    self.login(email, password)
chatexchange/client.py:137: in login
    self._br.login_site(self.host)
chatexchange/browser.py:126: in login_site
    'openid_identifier': 'https://openid.stackexchange.com/'
chatexchange/browser.py:138: in _se_openid_login_with_fkey
    fkey_soup = self.get_soup(fkey_url, with_chat_root=False)
chatexchange/browser.py:74: in get_soup
    response = self.get(url, data, headers, with_chat_root)
chatexchange/browser.py:68: in get
    return self._request('get', url, data, headers, with_chat_root)
chatexchange/browser.py:58: in _request
    url, data=data, headers=headers, timeout=self.request_timeout)
../../../virtualenv/python2.7.8/lib/python2.7/site-packages/requests/sessions.py:395: in get
    return self.request('GET', url, **kwargs)
../../../virtualenv/python2.7.8/lib/python2.7/site-packages/requests/sessions.py:383: in request
    resp = self.send(prep, **send_kwargs)
../../../virtualenv/python2.7.8/lib/python2.7/site-packages/requests/sessions.py:486: in send
    r = adapter.send(request, **kwargs)
../../../virtualenv/python2.7.8/lib/python2.7/site-packages/requests/adapters.py:387: in send
    raise Timeout(e)
E   Timeout: (<requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0x7f4ca09aac50>, 'Connection to stackexchange.com timed out. (connect timeout=30.0)')

pinner_user_ids does not always exist

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 504, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/manish/SmokeDetector/ChatExchange/chatexchange/browser.py", line 597, in _runner
    self.on_activity(json.loads(a))
  File "/home/manish/SmokeDetector/ChatExchange/chatexchange/rooms.py", line 66, in on_activity
    for event in self._events_from_activity(activity, self.id):
  File "/home/manish/SmokeDetector/ChatExchange/chatexchange/rooms.py", line 80, in _events_from_activity
    event = events.make(room_event_data, self._client)
  File "/home/manish/SmokeDetector/ChatExchange/chatexchange/events.py", line 16, in make
    return cls(data, client)
  File "/home/manish/SmokeDetector/ChatExchange/chatexchange/events.py", line 51, in __init__
    self._init_from_data()
  File "/home/manish/SmokeDetector/ChatExchange/chatexchange/events.py", line 82, in _init_from_data
    self._update_message()
  File "/home/manish/SmokeDetector/ChatExchange/chatexchange/events.py", line 100, in _update_message
    del message.pinner_user_ids
AttributeError: pinner_user_ids

I'll look into this in a while.

Error when trying to call `Client.get_me`

When I try to call Client.get_me, I get this error:

Traceback (most recent call last):
  File "zalgo.py", line 14, in <module>
    me = client.get_me()
  File "build\bdist.win32\egg\chatexchange\client.py", line 125, in get_me
  File "build\bdist.win32\egg\chatexchange\_utils.py", line 78, in __get__
  File "build\bdist.win32\egg\chatexchange\browser.py", line 235, in _update_chat_fkey_and_user
  File "build\bdist.win32\egg\chatexchange\browser.py", line 218, in _load_user
  File "build\bdist.win32\egg\chatexchange\browser.py", line 226, in user_id_and_name_from_link
ValueError: invalid literal for int() with base 10: 'stackexchange.com'

The code:

import logging
import time

import chatexchange

#logging.basicConfig(level=logging.DEBUG)

with open("D:/CREDENTIALS") as f: # Change the path to link your credential file looking like `<email> <password>`
    s = f.read().split()

email, password = s

client = chatexchange.Client('stackexchange.com', email, password)
me = client.get_me()
sandbox = client.get_room(1)
sandbox.send_message("test from chatexchange")

time.sleep(2)

The credentials are correct, and posting from the main account works.

Throttling, Retrying

As discussed in #59, we want to have some universal throttling mechanism for client requests.


Throttling

When we post a new message, or edit one, the requests are throttled, and retried when appropriate. This is good, but doesn't apply to any of the other requests we make. We should generalize the existing code, and make it easy to apply for different types of requests. (There would need to be some code specific to recognizing success/temporary error/fatal error for different types of requests.) Ignoring the implementation for a sec, what behaviour do we want?

It might be reasonable to have two request queues, one for read requests and one for write requests. That way we can keep seeing updates, even while our chat messages are throttled and being retried. Maybe by default they could limit us to one request per five seconds, or maybe a smaller limit that increases if we keep sending a lot of requests. Or maybe the read queue could allow a couple of requests to be in flight at once, while writing is limited to a single request.
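
A sketch of the two-queue idea: independent minimum intervals for read and write requests, so that slow, retried writes don't hold up reads. The five-second figure comes from the suggestion above; everything else is illustrative.

import threading
import time

class Throttle(object):
    def __init__(self, interval):
        self.interval = interval
        self._lock = threading.Lock()
        self._next_allowed = 0.0

    def wait(self):
        # Reserve the next slot, then sleep outside the lock if we are early.
        with self._lock:
            now = time.monotonic()
            delay = max(0.0, self._next_allowed - now)
            self._next_allowed = max(now, self._next_allowed) + self.interval
        if delay:
            time.sleep(delay)

read_throttle = Throttle(interval=1.0)    # reads can afford to be more frequent
write_throttle = Throttle(interval=5.0)   # posting/editing messages

# call read_throttle.wait() or write_throttle.wait() before each request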

Is the set of fields in event data based on the event type or the message?

Does a MessageEdited event have a message_edits field because it's a MessageEdited event, or because it's about a message that has been edited?

Does a MessageStarred event include a message_stars field because it's a MessageStarred event, or because it's about a message that has stars?

It might be because it's about a message that has stars, because I just got a MessageStarred event without it, after unstarring a message and setting its value to 0.

If it's based on the message, then some logic should be moved into the MessageEvent base class, instead of in the specific subclasses.

This also would mean that the absence of one of the fields is to be taken as specifically saying its value is 0, not that it's unspecified.

Editing messages

I thought starring (#50) would be easier, but it looks like it's not, so let's add the ability to edit messages first.

[Support] When running something using this CE engine in PyCharm, "failed to get `usr` cookie from Stack Exchange OpenID"

I'm not sure if this is a PyCharm issue or not, but what can cause a None to be received by the ChatExchange engine here, leading to a failure? I'm trying to debug some things in SmokeDetector that require a full debugger instance, and when I run this under PyCharm's debugger to set my breakpoints I always get the "failed to get usr cookie from Stack Exchange OpenID" error. I'm just trying to determine what could cause this.

500 Internal Server Error

Hi Manish,

I am getting this error while connecting to SE.

Traceback (most recent call last):
  File "C:\Shiro\Shiro-master\Shiro-master\bot.py", line 470, in <module>
    main()
  File "C:\Shiro\Shiro-master\Shiro-master\bot.py", line 80, in main
    client.login(email, password)
  File "C:\Shiro\Shiro-master\Shiro-master\chatexchange\client.py", line 140, in login
    self._br.login_site(self.host)
  File "C:\Shiro\Shiro-master\Shiro-master\chatexchange\browser.py", line 169, in login_site
    'openid_identifier': 'https://openid.stackexchange.com/'
  File "C:\Shiro\Shiro-master\Shiro-master\chatexchange\browser.py", line 190, in _se_openid_login_with_fkey
    response = self.post(post_url, data, with_chat_root=False)
  File "C:\Shiro\Shiro-master\Shiro-master\chatexchange\browser.py", line 113, in post
    return self._request('post', url, data, headers, with_chat_root)
  File "C:\Shiro\Shiro-master\Shiro-master\chatexchange\browser.py", line 102, in _request
    response.raise_for_status()
  File "C:\Python27\lib\site-packages\requests\models.py", line 929, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
HTTPError: 500 Server Error: Internal Server Error for url: https://stackexchange.com/error?aspxerrorpath=/users/authenticate/

Any ideas what the issue could be? I have checked everything: the user ID and password, the reputation required for the chat room, trying out the chat room, manually logging in, etc. Not sure what is going wrong here.

Use actual hostname instead of abbreviated host/site parameter

Instead of using abbreviated host/site identifiers, we could use the actual host/domain name of the site: SEChatWrapper(host='stackoverflow.com')

This would allow some of our logic to just use the host value directly, instead of needing to translate from the abbreviated values.

For backwards-compatibility, we can accept the abbreviated names for now, but trigger a DeprecationWarning.
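
A sketch of that compatibility shim: accept the old abbreviated names, warn, and normalize to the real hostname. The abbreviations shown ('SE', 'SO', 'MSO') are assumptions about what the current short identifiers are.

import warnings

_ABBREVIATIONS = {
    'SE': 'stackexchange.com',
    'SO': 'stackoverflow.com',
    'MSO': 'meta.stackoverflow.com',
}

def normalize_host(host):
    if host in _ABBREVIATIONS:
        warnings.warn(
            "abbreviated host names are deprecated; use %r instead"
            % _ABBREVIATIONS[host], DeprecationWarning)
        return _ABBREVIATIONS[host]
    return host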

(If completed, I won't merge this directly into master; I'll leave it for @Manishearth to review.)

Using requests version 2.6.2 prevents ChatExchange from logging in with 'Connection aborted.'

When utilizing requests 2.6.2, ChatExchange fails to log in and throws a Connection Aborted error. Reverting to requests 2.5.x solves the problem.

File "H:/test_bot/run_bot.py", line 96, in __init__
  self.client.login(email, password)
File "H:\test_bot\ChatExchange\chatexchange\client.py", line 137, in login
  self._br.login_site(self.host)
File "H:\test_bot\ChatExchange\chatexchange\browser.py", line 151, in login_site
  'openid_identifier': 'https://openid.stackexchange.com/'
File "H:\test_bot\ChatExchange\chatexchange\browser.py", line 172, in _se_openid_login_with_fkey
  response = self.post(post_url, data, with_chat_root=False)
File "H:\test_bot\ChatExchange\chatexchange\browser.py", line 96, in post
  return self._request('post', url, data, headers, with_chat_root)
File "H:\test_bot\ChatExchange\chatexchange\browser.py", line 66, in _request
  url, data=data, headers=headers, timeout=self.request_timeout)
File "H:\python-virtualenvs\temp-requests-failure\lib\site-packages\requests\sessions.py", line 508, in post
  return self.request('POST', url, data=data, json=json, **kwargs)
File "H:\python-virtualenvs\temp-requests-failure\lib\site-packages\requests\sessions.py", line 465, in request
  resp = self.send(prep, **send_kwargs)
File "H:\python-virtualenvs\temp-requests-failure\lib\site-packages\requests\sessions.py", line 594, in send
  history = [resp for resp in gen] if allow_redirects else []
File "H:\python-virtualenvs\temp-requests-failure\lib\site-packages\requests\sessions.py", line 196, in resolve_redirects
  **adapter_kwargs
File "H:\python-virtualenvs\temp-requests-failure\lib\site-packages\requests\sessions.py", line 573, in send
  r = adapter.send(request, **kwargs)
File "H:\python-virtualenvs\temp-requests-failure\lib\site-packages\requests\adapters.py", line 415, in send
  raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ResponseNotReady())
