Slack excerpt

kola [7:42 PM]
Hello everyone, I need help with enabling third-party auth.
The steps I’ve taken are:

  • stop edxapp: lms and cms
  • set these variables in lms.env.json
```
"AUTH_USE_OPENID_PROVIDER": true,
"ENABLE_COMBINED_LOGIN_REGISTRATION": true,
```
  • set these in lms.auth.json
```
"THIRD_PARTY_AUTH": {
    "Google": {
        "SOCIAL_AUTH_GOOGLE_OAUTH2_KEY": "**********",
        "SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET": "***********"
    },
    "Facebook": {
        "SOCIAL_AUTH_FACEBOOK_KEY": "**********",
        "SOCIAL_AUTH_FACEBOOK_SECRET": "***********"
    }
}
```
  • ran these commands
```
sudo su edxapp -s /bin/bash
cd ~
source edxapp_env
python /edx/app/edxapp/edx-platform/manage.py lms makemigrations --settings=aws
python /edx/app/edxapp/edx-platform/manage.py lms migrate --settings=aws
```
  • start lms and cms

But the social buttons didn’t show. What could I be doing wrong or what else do I need to do? I tried it on cypress as well as dogwood.

Shohei Maeda [7:53 PM]
@kola: Maybe you need to log in to Django Admin and enable that module.
Django Admin -> Third_party_auth -> Provider Configuration (OAuth2)
Have you tried this? (edited)
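For reference, the same change can be sketched from the Django shell. This is only a sketch: it assumes the dogwood-era third_party_auth.models.OAuth2ProviderConfig fields, so verify the field names against your release before running anything.
```python
# Sketch only: the Django Admin steps above, done from `manage.py lms shell`.
# Assumes dogwood-era third_party_auth.models.OAuth2ProviderConfig; field
# names may differ in your tree.
from third_party_auth.models import OAuth2ProviderConfig

OAuth2ProviderConfig.objects.create(
    enabled=True,                       # the checkbox the Admin page exposes
    name='Google',
    icon_class='fa-google-plus',        # icon shown on the login button
    backend_name='google-oauth2',       # python-social-auth backend name
    key='YOUR-GOOGLE-OAUTH2-KEY',       # placeholders, not real credentials
    secret='YOUR-GOOGLE-OAUTH2-SECRET',
)
```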

kola [7:59 PM]
@smaeda-gacco: Not at all. I’ll try that now.

kola [8:07 PM]
@smaeda-gacco: Thanks a lot. Considering the amount of time I wasted on this while following the instructions in the Open edX documentation, I suggest this be made part of the documentation.

luiz.aoqui [10:07 AM]
hi @tobz, I am working with @kola and I think it would be interesting to also document the THIRD_PARTY_AUTH_BACKENDS variable, which allows the use of other third-party auth backends

[10:09]
you can add it to the lms.env.json file, or with Ansible you need to declare it inside the EDXAPP_ENV_EXTRA variable, like:

  THIRD_PARTY_AUTH_BACKENDS:
    - 'social.backends.google.GoogleOAuth2'
    - 'social.backends.facebook.FacebookOAuth2'
    - 'social.backends.github.GithubOAuth2'
    - 'social.backends.live.LiveOAuth2'
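For the lms.env.json route mentioned above, the equivalent would presumably be a plain JSON list under the same key. A sketch; only the key name and backend paths come from the message above:
```json
{
    "THIRD_PARTY_AUTH_BACKENDS": [
        "social.backends.google.GoogleOAuth2",
        "social.backends.facebook.FacebookOAuth2"
    ]
}
```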

Shohei Maeda [4:52 PM]
@swapniljio:
Have you tried setting {"USE_DEPRECATED_API": true} in the third_party_auth_oauth2providerconfig table (other_settings)? (edited)

rp_slack [2:12 PM]
I had created the required directory with the images, SCSS files, and footer and header HTML files in the ‘…/edx-platform/themes’ directory. I also removed the reference to ‘stanford_theme_enabled’ from line 3 of the header HTML file. Then I put the path to my theme directory as the value for ‘COMPREHENSIVE_THEME_DIR’ in lms.env.json and updated my assets manually.

[2:13]
But now CMS is working while LMS is showing Bad Request.

[2:15]
@danielmcq: Can you please tell me what the problem is, or is there anything else to be done apart from what is mentioned in the document?

Daniel McQuillen [6:43 PM]
@rp_slack : try adding EDXAPP_LMS_NGINX_PORT: 80 to your /edx/app/edx_ansible/server-vars.yml file and then running update again

Daniel McQuillen [6:44 AM]
@rp_slack You can create a new file here: /edx/app/edx_ansible/server-vars.yml. Any variables in that file will be used by the ‘update’ process (see this part of the Fullstack documentation: https://github.com/edx/configuration/wiki/edX-Managing-the-Full-Stack#updating-versions-using-edx-repos).

[6:45]
@rp_slack there was a nice blog post by Arif Setiawan on the topic of ‘server-vars.yml’ that helped me a lot: http://blog.infinitesquares.net/blog/2015/07/25/customizing-open-edx-settings/
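Putting Daniel's suggestion together, a minimal server-vars.yml might look like the sketch below. Only EDXAPP_LMS_NGINX_PORT comes from the advice above; the update invocation follows the conventions of the linked wiki page:
```yaml
# /edx/app/edx_ansible/server-vars.yml -- picked up by the update script
EDXAPP_LMS_NGINX_PORT: 80
# then re-run the update, e.g.:
#   sudo /edx/bin/update edx-platform named-release/dogwood
```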

@nedbat: Does edX have a way to limit access to a section or subsection (http://edx.readthedocs.org/projects/open-edx-ca/en/named-release-dogwood.rc/developing_course/course_units.html#hide-a-unit-from-students) for a specific group of students, not all students? I would like to group students in a “full access” category that would control whether they can view the content or not.
MT 07:06:00
@ztraboo: Take a look at “Cohort-Specific Course Content” http://edx.readthedocs.org/projects/edx-partner-course-staff/en/latest/course_features/cohorts/cohorted_courseware.html
MT 07:06:13
@nedbat: For example, Trial Access would limit the users to just Sections 1-3 and 5. Full Access would allow the user to see all Sections.
MT 07:06:28
We’ve used it to provide different content to ID-verified students, and to provide custom content to a group of folks that we knew who were taking the course.
MT 07:08:39
@pdpinch: Thanks a lot. I’ll develop a course with cohorts in mind. What are you typically calling the cohort groups that you create? I called mine Full Access, Trial Access. I noticed on the LMS that I could view the content by these cohorts. That’s helpful.
MT 07:08:49
We’ve left it up to course authors for naming. In one case it was “ID verified” (and everyone else). In the other case it was “Students in Scotland” (or something like that) and

sylvainblot [10:40 PM]
I’m missing the launch-task command

sylvainblot [11:08 PM]
it seems to be calling a jar file; does someone know where I can find it? thanks

----- January 15th -----
brian [1:35 AM]
Hi @sylvainblot Can you give a little more information about the particular jar? I think the intention is for https://openedx.atlassian.net/wiki/display/OpenOPS/edX+Analytics+Installation to supersede https://github.com/edx/edx-analytics-pipeline/wiki/Running-the-analytics-backend-locally going forward. However, once things are set up, the next step would be https://github.com/edx/edx-analytics-pipeline/wiki/Tasks-to-Run-to-Update-Insights. Under “performance” there is an entry that mentions “You can find the oddjob jar at https://github.com/jblomo/oddjob/tree/jars”. Is that the jar that you are missing?

sylvainblot [4:19 PM]
Hi @brian, thank you, that’s it!

divya.mangotra [5:15 PM]
Hi. I have set up Insights by running the ansible playbook analytics_single.yml. I am able to open the analytics web page by using http://server-ip:18110, but when I try to log in, the page does not get loaded and I get redirected to: http://127.0.0.1:8000/oauth2/authorize/?nonce=yf5iST87RaK2uHcV0oWVtWxMPAXCV4Qe1aBmsuzMfpPeAtgJ25RlYTCbAFJVIbge&state=45vzvUpPhyXu2jMpD4EZg10SoLAyunWv&redirect_uri=http://ServerIp:18110/complete/edx-oidc/&response_type=code&client_id=oauthkey&scope=openid+profile+email+permissions+course_staff

sylvainblot [5:50 PM]
@divya.mangotra: you need to set up OAuth2

[5:51]
https://openedx.atlassian.net/wiki/display/AN/Configuring+Insights+for+Open+ID+Connect+SSO+with+LMS

[5:51]
@divya.mangotra: are you using AWS?

divya.mangotra [5:51 PM]
I did, using the instructions here: https://openedx.atlassian.net/wiki/display/OpenOPS/edX+Analytics+Installation

[5:51]
Yeah.

[5:51]
Was trying on EC2 instance where fullstack is installed.

sylvainblot [5:52 PM]
ok follow the SSO guide and you will be able to log in

divya.mangotra [5:52 PM]
Okay. Thanks.

----- January 16th -----
sylvainblot [12:27 AM]
I’ve removed the aws task from the provisioning and I was able to install everything; the dashboard is running etc. Now I’m facing a problem trying to run the task locally, can someone confirm if it’s possible? I’m getting http://pastebin.com/wQRXNCRu while running remote-task, any help appreciated! Thanks

[12:28]
You can skip lines 1-4; the directory was missing anyway

sylvainblot [12:51 AM]
do I have to use launch-task instead?

----- January 22nd -----
james [2:37 PM]
@sylvainblot: You should use the root account and install the pipeline again; the permission check will pass!

----- January 27th -----
vijay.pahuja.85 [3:27 PM]
hello all, I installed edX Insights using the analytics_single playbook on my dogwood.rc2 server but the /login page shows a 404 error. It redirects to http://server-url:18110/login/edx-oidc//oauth2/authorize/?nonce=Bo4mM9j0Li1GqQWS4Vxf7BjliX27WTFh6JoLMGmfCHAP5TxFw69RAD5B3ycSqO2f&state=wrj3Z4Q2ZzuxGXI3wou5TNKHW6pOvoXT&redirect_uri=http://server-url:18110/complete/edx-oidc/&response_type=code&client_id=YOUR_OAUTH2_KEY&scope=openid+profile+email+permissions+course_staff (edited)

vijay.pahuja.85 [7:07 PM]
also need help with this error coming in /edx/var/log/insights/edx.log:
InvalidKeyError: <class 'opaque_keys.edx.locator.CourseLocator'>

[7:09]
the above error comes when I click the link in the Instructor tab in the LMS, and it shows error 500

[7:10]
link ‘try our new insight product’

----- February 1st -----
sylvainblot [5:57 PM]
Hello, I’ve installed Insights following https://openedx.atlassian.net/wiki/display/OpenOPS/edX+Analytics+Installation and everything is OK except the reports DB, which is empty. Where can I get the schema, or can I use Ansible to force its creation? Thanks!

sylvainblot [11:16 PM]
I tried once again on a fresh server, and after provisioning with analytics_single.yml there are no tables in the reports database. Should I file a bug report?

----- February 3rd -----
Daniel Friedman [12:16 AM]
@mulby: sounds like the migrate step may not be happening?

Gabe Mulley [12:17 AM]
not exactly

[12:17]
the pipeline creates and maintains that schema

[12:17]
it’s not managed by django migrations

[12:17]
@sylvainblot: it sounds like you haven’t run the pipeline yet
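A minimal enrollment-import run might look like the sketch below; the host, user, and flags mirror the AnswerDistributionWorkflow invocation later in this thread, and the interval value is only an example:
```
remote-task ImportEnrollmentsIntoMysql --host localhost --user edxtma \
    --remote-name analyticstack --skip-setup --local-scheduler --wait \
    --interval 2016-01-01-2016-02-01
```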

sylvainblot [12:33 AM]
oh ok thanks

[12:35]
@mulby: which user is supposed to run it?

sylvainblot [12:48 AM]
@mulby: I just did an ImportEnrollmentsIntoMysql run without any errors and the reports DB on the Insights server is still empty

[12:48]
oh no it’s ok :simple_smile:

[12:48]
thanks !!!

Gabe Mulley [3:08 AM]
@sylvainblot: glad to hear it’s working for you

[3:08]
!

----- February 5th -----
sylvainblot [4:59 PM]
Hello, is there a guide on how to run the AnswerDistributionWorkflow task? The oddjob jar and so on. Thanks

sylvainblot [10:20 PM]
@mulby: Do you have any information about the manifest.txt and the oddjob jar? How to get them, where to put them?

Gabe Mulley [10:20 PM]
manifest.txt is generated by the pipeline

[10:20]
you just need to tell it where to put it

[10:20]
which is typically an arbitrary location in HDFS/S3

[10:21]
the oddjob jar is trickier

[10:21]
depending on how you set up your installation it may already be available for you to use

[10:21]
stored in HDFS

[10:23]
it is likely here: hdfs://localhost:9000/edx-analytics-pipeline/packages/edx-analytics-hadoop-util.jar

[10:23]
I had to port oddjob to vanilla java

[10:23]
to get it to compile with newer versions of hadoop

sylvainblot [10:28 PM]
hum ok

[10:28]
in my case I don’t have it; I’m using the Confluence script and I run everything locally

[10:29]
I put that one /edx/app/hadoop/lib/edx-analytics-hadoop-util.jar in the HDFS

[10:30]
remote-task AnswerDistributionWorkflow --n-reduce-tasks 1 --host localhost --user edxtma --remote-name analyticstack --skip-setup --local-scheduler --verbose --wait --src hdfs://localhost:9000/data --dest hdfs://localhost:9000/tmp/pipeline-task-scheduler/AnswerDistributionWorkflow/1449177792/dest --name pt_1449177792 --output-root hdfs://localhost:9000/tmp/pipeline-task-scheduler/AnswerDistributionWorkflow/1449177792/course --include "*tracking.log*.gz" --manifest hdfs://localhost:9000/tmp/pipeline-task-scheduler/AnswerDistributionWorkflow/1449177792/manifest.txt --base-input-format "org.edx.hadoop.input.ManifestTextInputFormat" --lib-jar hdfs://localhost:9000/edx-analytics-pipeline/packages/edx-analytics-hadoop-util.jar --marker hdfs://localhost:9000/tmp/pipeline-task-scheduler/AnswerDistributionWorkflow/1449177792/marker --credentials /edx/etc/edx-analytics-pipeline/output.json

[10:30]
that should do the job?

Gabe Mulley [10:31 PM]
that looks reasonable

sylvainblot [10:32 PM]
I have a strange error about the Java process exiting with a 143 code

[10:32]
It’s not that verbose; hard for me to track the issue

Gabe Mulley [10:33 PM]
where do you see that error message?

sylvainblot [10:33 PM]
In the command output at the MapReduce step

[10:34]
It’s running atm; I will give you a proper pastebin if you have a minute to look at it

Gabe Mulley [10:36 PM]
sure

[10:36]
I also recommend searching the openedx-analytics google group

[10:37]
since many of the common errors appear there now

[10:37]
in fact, I would recommend you post the error there now if you don’t find it there

[10:37]
so that others can find it if/when they encounter it

sylvainblot [10:38 PM]
Sure

[10:38]
I already had a look at the mailing list, seems like I’m alone on that one :simple_smile:

sylvainblot [10:51 PM]
@mulby: http://pastebin.com/jXt38Dnn

[10:52]
the 143 errors start triggering at line 124

Gabe Mulley [10:53 PM]
alright, so this is Hadoop telling you that it killed one of its tasks

[10:53]
2016-02-05 15:40:02,011 INFO 6281 [luigi-interface] hadoop.py:234 - Container [pid=21582,containerID=container_1454580994369_0028_01_000259] is running beyond virtual memory limits. Current usage: 244.6 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.

[10:54]
I recommend looking at the detailed logs for that container

[10:54]
to see what is going on

sylvainblot [10:54 PM]
oh thanks !

Gabe Mulley [10:54 PM]
it could be that there is a bug in the code

[10:54]
that is causing it to use too much memory

[10:54]
that your tracking log happens to hit

[10:55]
or it could be that you just have too much data for the machine to handle in a single task

sylvainblot [10:55 PM]
yes I will try to increase the RAM

Gabe Mulley [10:55 PM]
you could also try increasing the number of reduce tasks

sylvainblot [10:55 PM]
1.6G uncompressed

Gabe Mulley [10:55 PM]
--n-reduce-tasks 10

sylvainblot [10:55 PM]
ok

Gabe Mulley [10:55 PM]
frankly I don’t remember what structures are held in memory

[10:55]
for that particular workflow

[10:56]
so it may take some digging to figure that out

[10:56]
in general we try to hold as little in memory as possible

[10:56]
but there are times we do so

[10:56]
since it’s vastly more simple

sylvainblot [10:57 PM]
ok thanks for the help

Gabe Mulley [11:00 PM]
np

sylvainblot [11:26 PM]
@mulby: I’m getting errors related to Java this time, not to memory :simple_smile: I tried with a single tracking.log file http://pastebin.com/ktrPkAMc

Gabe Mulley [11:46 PM]
again this is not really a java error

[11:46]
2016-02-05 16:23:22,274 INFO 7350 [luigi-interface] hadoop.py:234 - Job failed as tasks failed. failedMaps:0 failedReduces:1

[11:46]
the job failed

[11:46]
since a reduce task failed

[11:46]
you should look at the logs for the reduce tasks

[11:49]
hadoop dfs -ls /tmp/logs/hadoop/logs/

[11:49]
should show you all of the logs for the various jobs you’ve run

[11:49]
it looks like application_1454684580767_0004 failed

[11:50]
hadoop dfs -cat /tmp/logs/hadoop/logs/application_1454684580767_0004/*

[11:50]
should show you the detailed error logs

sylvainblot [11:51 PM]
thanks I was looking in the hadoop web ui :simple_smile:

[11:56]
there is no such file; I might be missing a remote-task command switch

----- February 6th -----
Gabe Mulley [1:53 AM]
it should be somewhere in HDFS

----- February 9th -----
sylvainblot [12:06 AM]
thanks @mulby, I found the reason in the log: the CSV file was already in HDFS; by changing the path I managed to run the command

sylvainblot [1:10 AM]
Is there any reason for the performance menu item to not show up? I can’t find a reason from the code.

sylvainblot [5:57 PM]
(for info, you just have to enable the course API switch)
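A sketch of enabling it, assuming the course API is gated by a django-waffle switch named enable_course_api in the Insights app; the switch name is an assumption, so check your Insights settings for the exact one:
```
# from the Insights app directory, inside its virtualenv;
# waffle_switch is django-waffle's stock management command,
# and --create adds the switch if it doesn't exist yet
python manage.py waffle_switch enable_course_api on --create
```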

----- February 13th -----
Zachary Trabookis [6:08 AM]
@mulby Is there anything special that needs to occur when upgrading the edx-analytics repositories from Cypress to Dogwood, other than pulling the latest release tags? (edited)

Gabe Mulley [6:21 AM]
@ztraboo: we haven’t actually figured that out

[6:22]
the level of support we have in Dogwood is that the repos are all tagged together so that people know what to pull if they are standing up a new instance

[6:22]
we would have to review the commits to see if any breaking changes were made

[6:23]
although we discovered today that one of the repos had a stale release branch so its dogwood tag is actually out of date…

Zachary Trabookis [6:25 AM]
@mulby How will the community know when it’s safe to upgrade from Cypress to Dogwood for analytics? Will edX notify this channel?

Zachary Trabookis [6:40 AM]
@mulby I’m assuming that edX is running the latest Dogwood tags in production so it should be fine then right?

Joshua Tseng [9:37 AM]
@mulby: I am interested: which repo is out of date?

----- February 14th -----
Gabe Mulley [8:08 AM]
@josh: edx-analytics-data-api

[8:08]
we executed several releases without updating the release branch (accidentally)

[8:08]
and dogwood was tagged off of the release branches

[8:09]
@ztraboo: I would have to double check if we are still running dogwood in production

[8:09]
but if we aren’t we are running something very close to it

[8:10]
we aren’t planning on executing an upgrade from cypress to dogwood ourselves to see if anything breaks

[8:11]
if you do attempt this, I recommend changing the name of the pipeline output database and running all tasks from scratch

[8:12]
and then pointing your dogwood API at the newly built result store database

[8:12]
this is how we execute our 0 downtime releases at edX

[8:12]
we exploit the fact that it’s a batch processing system and that we can regenerate the entire result store from scratch at any moment in time

[8:13]
this setting: https://github.com/edx/edx-analytics-pipeline/blob/master/config/devstack.cfg#L7

GitHub
edx/edx-analytics-pipeline
Contribute to edx-analytics-pipeline development by creating an account on GitHub.

[8:13]
we actually just increment a version number, so we’ll populate “reports_1_0” and then “reports_1_1” etc
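A sketch of the idea; the section and key names here are assumptions based on edx-analytics-pipeline's config file layout, so check the linked devstack.cfg for the exact ones:
```
[database-export]
# bump this name (e.g. reports_1_0 -> reports_1_1), rerun all tasks so the
# pipeline populates the fresh schema, then point the API at the new database
database = reports_1_1
```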

Zachary Trabookis [8:47 AM]
@mulby thanks

----- February 17th -----
Nacho Díaz [8:09 PM]
what about Insights and the new Dogwood version?

[8:10]
It’s possible to make it work!

[8:10]
analytics_api RUNNING pid 14126, uptime 21:46:58

[8:11]
insights RUNNING pid 14153, uptime 21:46:58

----- February 19th -----
Gabe Mulley [12:11 AM]
@nachotheix: I’m not sure I understand your question

Nacho Díaz [4:53 PM]
@mulby: Any doc about configuring Insights in Dogwood?

[4:54]
@mulby: Insights has compiled fine and it’s running, but the link in the Instructor section to Insights doesn’t work

Gabe Mulley [9:04 PM]
@nachotheix: no dogwood specific docs

[9:09]
we are starting to gather useful information here: https://openedx.atlassian.net/wiki/display/AN/Analytics+Developer+Docs

[9:09]
it includes links to installation instructions as well

----- February 22nd -----
vijay.pahuja.85 [2:48 PM]
I am trying to install the Insights server following this doc:
https://openedx.atlassian.net/wiki/display/OpenOPS/edX+Analytics+Installation

[2:49]
I am stuck at

Ensure you’re in the pipeline virtualenv

remote-task --host localhost --repo https://github.com/edx/edx-analytics-pipeline --user ubuntu --override-config $HOME/edx-analytics-pipeline/config/devstack.cfg --wheel-url http://edx-wheelhouse.s3-website-us-east-1.amazonaws.com/Ubuntu/precise --remote-name analyticstack --wait TotalEventsDailyTask --interval 2015 --output-root hdfs://localhost:9000/output/ --local-scheduler

vijay.pahuja.85 [2:49 PM]
added a Plain Text snippet:
The playbook runs and says:
PLAY [Configure luigi] ********************************************************
TASK: [luigi | configuration directory created] *******************************
skipping: [localhost]
vijay.pahuja.85 [2:50 PM]
what should it fail at finding home directory

[2:52]
why* should it fail at finding home directory?

sylvainblot [5:36 PM]
@vijay.pahuja.85: are you on AWS?

vijay.pahuja.85 [5:47 PM]
@sylvainblot: no, it’s not AWS. (edited)

sylvainblot [5:48 PM]
then adapt --user ubuntu to your user and use the full path for --override-config
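In other words, something like the sketch below, substituting an example user and path into the command quoted from the wiki above:
```
remote-task --host localhost --user myuser \
    --override-config /home/myuser/edx-analytics-pipeline/config/devstack.cfg \
    --repo https://github.com/edx/edx-analytics-pipeline \
    --wheel-url http://edx-wheelhouse.s3-website-us-east-1.amazonaws.com/Ubuntu/precise \
    --remote-name analyticstack --wait TotalEventsDailyTask \
    --interval 2015 --output-root hdfs://localhost:9000/output/ --local-scheduler
```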

vijay.pahuja.85 [5:52 PM]
thanks @sylvainblot, so stupid of me to overlook that one! The playbook is running ahead.

sylvainblot [5:52 PM]
perfect :simple_smile:

vijay.pahuja.85 [8:35 PM]
@sylvainblot: I have reached the last step, the SSO command:
sudo su edxapp
/edx/bin/python.edxapp /edx/bin/manage.edxapp lms --settings=aws create_oauth2_client http://107.21.156.121:18110 http://107.21.156.121:18110/complete/edx-oidc/ confidential --client_name insights --client_id YOUR_OAUTH2_KEY --client_secret secret --trusted

I need a little help understanding --client_name, --client_id and --client_secret.

What settings do --client_name , --client_id and --client_secret correspond to?

----- Yesterday February 23rd, 2016 -----
Pierre Mailhot [4:09 AM]
@mulby: Hi Gabe. Planning to reinstall our Insights server after we migrate our production to Dogwood next weekend. I assume the repositories tagged with Dogwood will work together? Also planning to use Hadoop 2.x this time.

Gabe Mulley [4:16 AM]
@sambapete: glad you asked: "edx-analytics-data-api" was incorrectly tagged

[4:16]
I would use master instead of the dogwood tag

[4:18]
@vijay.pahuja.85: those values correspond to entries in the LMS OAuth trusted clients

[4:18]
err OAuth clients

[4:18]
the LMS issues a client ID and a secret to insights

[4:18]
which it can then use to authenticate users
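Concretely, the values passed to create_oauth2_client have to match Insights' own OIDC settings. A sketch, assuming the dogwood-era edx-analytics-dashboard setting names; the URL root is an example for a fullstack box:
```yaml
# Insights-side settings (names assumed from the dogwood-era analytics
# dashboard OIDC integration; values echo the example command above)
SOCIAL_AUTH_EDX_OIDC_KEY: 'YOUR_OAUTH2_KEY'    # must equal --client_id
SOCIAL_AUTH_EDX_OIDC_SECRET: 'secret'          # must equal --client_secret
SOCIAL_AUTH_EDX_OIDC_URL_ROOT: 'http://107.21.156.121/oauth2'
SOCIAL_AUTH_EDX_OIDC_ID_TOKEN_DECRYPTION_KEY: 'secret'
```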

Pierre Mailhot [4:21 AM]
@mulby: Thanks for the heads up. I thought I saw something last week or the week before about it. It’s on my todo list in the near future.

Pierre Mailhot [4:41 AM]
@mulby: Should I use master only for "edx-analytics-data-api" or for the others too? Just checking.

Gabe Mulley [4:47 AM]
just edx-analytics-data-api

apatterson [8:39 AM]
Thanks everyone. I’ve found references to LTI, but it seems like most LMSs at a minimum support SCORM and AICC (although dated). I was hoping there was some support for either of those.

Ned Batchelder [8:41 AM]
apatterson: we don’t support SCORM directly, though I know some people have done some work on converting content. I don’t know what AICC is.

apatterson [8:47 AM]
@nedbat: Thanks Ned. AICC is somewhat of a legacy protocol, but is still supported by many LMSs due to its stability: https://en.wikipedia.org/wiki/Aviation_Industry_Computer-Based_Training_Committee

[8:50]
@nedbat: I certainly would be interested in any documentation or resources on some form of SCORM implementation, though not directly supported

Ned Batchelder [8:50 AM]
apatterson: i would check the mailing list: https://groups.google.com/forum/#!forum/edx-code

ovnicraft [10:31 AM]
@apatterson: we are looking for a SCORM implementation too; btw now in our team has developers but scorm experts

[10:34]
I found this: https://github.com/usernamenumber/xb_scorm/ (currently just for testing)

@joel: I’m not sure. But here is what I did: I enrolled for a verified course and paid for the certificate, then I unenrolled and a refund object got created. However, this refund (refund.models.Refund) is in the OPEN state.

The refund will actually be issued when someone calls the refund.approve method; this method is called from an APIView, and I’d like to know what calls said API view?

@jacek You want to go to the Otto dashboard /dashboard > Fulfillment > Refunds

a question about user profiles in edX: am I right that it only captures full names, and not first and last names separately?

Looking at https://github.com/edx/edx-platform/blob/master/common/djangoapps/student/models.py

@pdpinch: Yes, we only capture the full name. You can see the fields we collect here: http://edx.readthedocs.org/projects/edx-guide-for-students/en/latest/sfd_dashboard_profile/SFD_dashboard_settings_profile.html#view-or-change-basic-account-information

tangentially (but not helpful for your issue, peter): https://www.w3.org/International/questions/qa-personal-names

How can edx be configured to allow students to submit documents as homework?

@bradaldridge: I’m not sure why you posted in the news channel, but to answer your question, take a look at ORA (Open Response Assessment) and/or the SGA (Staff Graded Assignment) XBlock. The former is designed for MOOCs, the latter for smaller courses.

http://edx.readthedocs.org/projects/edx-partner-course-staff/en/latest/exercises_tools/open_response_assessments/OpenResponseAssessments.html

This is how I enabled the user picture in the navigation, like edx.org, on a Cypress installation: http://oonlab.com/edx/code/2016/03/11/profile-picture-open-edx-cypress/

I have a question about certificates in Dogwood. We created a course with an audit mode and a verified mode. Users in the audit mode are still offered certificates when they complete the course. I was under the impression they should not have received a certificate. We are still using the PDF certificate templates.

@sambapete: I’m not sure what would cause that. There are some things that look a little different in Dogwood with certificates: if you have no course modes at all, master will say, “This course does not use a mode that offers certificates.” On Dogwood, it will let you make certificates, but not preview or activate them. I’m not sure why that is.

@nedbat: I have been looking at the code. I found this which is post-dogwood https://github.com/edx/edx-platform/commit/96cc38951d1db4dc7d3b4111551a8f7b8a1ea46e

@sambapete: do you have the AUDIT_CERT_CUTOFF_DATE setting? that controls whether audit certs are generated.

unfortunately that code is very confusing…

@koljanos: I don’t know; there isn’t license information at https://github.com/openfun/edx-gea
But I think you can :simple_smile: I used it in some projects. Anyway, you can ask the author about the license: https://github.com/jpaille

the larger problem is that edx-notes-api is not versioned to dogwood, so we are getting master of edx-notes-api rather than the version of the code that worked with Dogwood.

@jacek @nedbat I wrote a blog post about the option.txt error, with two ways to fix it (manual installation, and automatic installation using sandbox.sh), here: http://oonlab.com/edx/code/2016/03/14/fix-optionaltxt-issue-open-edx-dogwood/

Can anyone help with this? https://groups.google.com/forum/#!topic/edx-code/wpo_xkxvY3U

guess this answers the question http://stackoverflow.com/questions/4988580/modelform-django-select-html-problem

@andya: Andy, I answered here: https://openedx.slack.com/archives/front-end/p1457933775000005

Wouter de Vries [5:40 PM]
So I only know of one way to hide courses: change the “Course Visibility In Catalog” to “none” in advanced settings of the course

[5:41]
however, in order for that to work you also need to change something in the common.py file of the LMS (located at /edx/app/edxapp/edx-platform/lms/envs/common.py in the fullstack vagrant install)

[5:42]
COURSE_CATALOG_VISIBILITY_PERMISSION = 'see_exists' to COURSE_CATALOG_VISIBILITY_PERMISSION = 'see_in_catalog'
COURSE_ABOUT_VISIBILITY_PERMISSION = 'see_exists' to COURSE_ABOUT_VISIBILITY_PERMISSION = 'see_about_page'

[5:42]
and make sure you are not logged in as a staff member, because then you will see the courses regardless
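In code, the two edited lines in /edx/app/edxapp/edx-platform/lms/envs/common.py end up as follows (setting names and values straight from Wouter's description):
```python
# lms/envs/common.py -- hide courses whose catalog visibility is "none"
# from the course catalog and the about pages
COURSE_CATALOG_VISIBILITY_PERMISSION = 'see_in_catalog'
COURSE_ABOUT_VISIBILITY_PERMISSION = 'see_about_page'
```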

these are last year’s projects: https://openedx.atlassian.net/wiki/display/OPEN/Project+Presentations
